bookkeeper-release-4.2.4/CHANGES.txt

Release 4.2.4 - 2015-01-12

Backward compatible changes:

 BUGFIXES:

  BOOKKEEPER-815: Ledger fence state is lost when the ledger file is evicted (Charles Xie via ivank)
  BOOKKEEPER-799: Distribution schedule coverage sets don't take gaps in response lists into account when writequorum > ackquorum (ivank)
  BOOKKEEPER-795: Race condition causes writes to hang if ledger is fenced (sijie via ivank)

 IMPROVEMENTS:

  BOOKKEEPER-800: Expose whether a ledger is closed or not (ivank)

Release 4.2.3 - 2014-06-27

Backward compatible changes:

 BUGFIXES:

  BOOKKEEPER-766: Update notice.txt files to include 2014 (ivank via fpj)
  BOOKKEEPER-767: Allow loopback in tests (ivank via fpj)
  BOOKKEEPER-765: bookkeeper script should fall back to java in path if JAVA_HOME is not set (ivank)

  bookkeeper-server:

  BOOKKEEPER-711: bookkeeper-daemon.sh will not remove the pid file on successful stop (vinay via sijie)
  BOOKKEEPER-712: bookkeeper script should use 'java' from JAVA_HOME (vinay via sijie)
  BOOKKEEPER-688: NPE exception in PerChannelBookieClient (ivank via sijie)
  BOOKKEEPER-602: we should have request timeouts rather than channel timeout in PerChannelBookieClient (Aniruddha via sijie)
  BOOKKEEPER-714: Logging channel exceptions in PerChannelBookieClient (sijie)
  BOOKKEEPER-726: PerChannelBookieClient should print address that it failed to connect to when it fails to connect (ivank via sijie)
  BOOKKEEPER-710: OpenLedgerNoRecovery should watch ensemble change. (sijie, ivank via fpj)
  BOOKKEEPER-742: Fix for empty ledgers losing quorum. (ivank)
  BOOKKEEPER-743: Periodic ledger check running too often as doc doesn't match implementation. (ivank)
  BOOKKEEPER-744: Run the auditor bookie check periodically (ivank)
  BOOKKEEPER-755: Incorrect number of seconds specified in a day (Joseph Redfern via fpj)
  BOOKKEEPER-752: Deadlock on NIOServer shutdown (sijie via ivank)
  BOOKKEEPER-673: Ledger length can be inaccurate in failure case (sijie via ivank)
  BOOKKEEPER-751: Ensure all the bookkeeper callbacks not run under ledger handle lock (sijie via ivank)
  BOOKKEEPER-745: Fix for false reports of ledger unreplication during rolling restarts. (ivank)
  BOOKKEEPER-708: Shade protobuf library to avoid incompatible versions (ivank)
  BOOKKEEPER-730: Shade pom file missing apache license header (ivank)
  BOOKKEEPER-725: AutoRecoveryMain should exit with error code if deathwatcher finds dead thread (ivank)
  BOOKKEEPER-750: Flake in BookieAutoRecoveryTest#testEmptyLedgerLosesQuorumEventually (ivank)

 IMPROVEMENTS:

  BOOKKEEPER-747: Implement register/unregister LedgerMetadataListener in MSLedgerManagerFactory (fpj via sijie)
  BOOKKEEPER-746: 5 new shell commands.
    List ledgers, list metadata, list underreplicated, show auditor and simpletest (ivank)

Release 4.2.2 - 2013-10-02

Backward compatible changes:

 BUGFIXES:

  BOOKKEEPER-635: jenkins build should highlight which lines of the patch cause raw analysis errors (ivank via sijie)
  BOOKKEEPER-684: ZK logging is over-verbose, can cause OOM in tests (ivank via fpj)

  bookkeeper-server:

  BOOKKEEPER-559: Fix occasional failure in AuditorBookieTest (ivank)
  BOOKKEEPER-556: BookieServerMXBean#getServerState makes no sense (ivank)
  BOOKKEEPER-583: Read from a ReadOnlyBookie fails if index fileinfo is not in ledger cache (vinay via sijie)
  BOOKKEEPER-585: Auditor logs noisily when a ledger has been deleted (ivank)
  BOOKKEEPER-581: Ledger recovery doesn't work correctly when recovery adds force changing ensembles. (sijie via ivank)
  BOOKKEEPER-595: Crash of inprocess autorecovery daemon should not take down the bookie (ivank)
  BOOKKEEPER-596: Ledgers are gc'ed by mistake in MSLedgerManagerFactory. (sijie via ivank)
  BOOKKEEPER-584: Data loss when ledger metadata is overwritten (sijie via ivank)
  BOOKKEEPER-577: BookieFailureTest uses sync/wait()/notify() incorrectly (ivank)
  BOOKKEEPER-626: BOOKIE_EXTRA_OPTS are added twice (vinay via fpj)
  BOOKKEEPER-619: Bookie should not create local cookie files if zookeeper is uninitialized (ivank)
  BOOKKEEPER-313: Bookkeeper shutdown call from Bookie thread is not shutting down server (vinay via ivank)
  BOOKKEEPER-623: LedgerChecker should avoid segments of closed ledger with higher start entryId than closed entry. (vinay via sijie)
  BOOKKEEPER-620: PerChannelBookieClient race during channel disconnect (ivank)
  BOOKKEEPER-637: NoSuchEntry exception when reading an entry from a bookie should not print ERROR level message (mmerli via ivank)
  BOOKKEEPER-257: Ability to list all ledgers (fpj via ivank)
  BOOKKEEPER-636: Latest txn logs might be deleted in a race condition which is not recoverable if BK goes down before next txn log created. (vinay via ivank)
  BOOKKEEPER-621: NPE in FileInfo.moveToNewLocation (ivank via sijie)
  BOOKKEEPER-646: BookieShell readjournal command is throwing BufferUnderflowException (Rakesh via sijie)
  BOOKKEEPER-652: Logger class name is wrong in LedgerCacheImpl.java (Rakesh via sijie)
  BOOKKEEPER-625: On OutOfMemoryError in NIOServerFactory thread bookie should shutdown (vinay via ivank)
  BOOKKEEPER-642: Bookie returns incorrect exitcode, ExitCode.ZK_REG_FAIL is getting overridden (Rakesh via ivank)
  BOOKKEEPER-663: HierarchicalLedgerManager iterator is missing some ranges and the last ledger in the range (mmerli via ivank)
  BOOKKEEPER-604: Ledger storage can log an exception if GC happens concurrently.
    (sijie & ivank via ivank)
  BOOKKEEPER-667: Client write will fail with BadMetadataVersion in case of multiple Bookie failures with AutoRecovery enabled (sijie via ivank)
  BOOKKEEPER-668: Race between PerChannelBookieClient#channelDisconnected() and disconnect() calls can make clients hang while add/reading entries in case of multiple bookie failures (sijie & ivank via ivank)
  BOOKKEEPER-624: Reduce logs generated by ReplicationWorker (vinay via ivank)
  BOOKKEEPER-660: Logs too noisy on NIOServerFactory when client drops a connection (mmerli via ivank)
  BOOKKEEPER-632: AutoRecovery should consider read only bookies (vinay via ivank)
  BOOKKEEPER-649: Race condition in sync ZKUtils.createFullPathOptimistic() (ivank)
  BOOKKEEPER-580: improve close logic (sijie & ivank via ivank)
  BOOKKEEPER-664: Compaction increases latency on journal writes (ivank & sijie via ivank)
  BOOKKEEPER-679: Bookie should exit with non-zero if NIOServer crashes with Error (ivank)
  BOOKKEEPER-669: Race condition in ledger deletion and eviction from cache (rakeshr via ivank)
  BOOKKEEPER-446: BookKeeper.createLedger(..) should not mask the error with ZKException (sijie via ivank)
  BOOKKEEPER-675: Log noise fixup before cutting 4.2.2 (ivank)
  BOOKKEEPER-627: LedgerDirsMonitor is missing thread name (rakeshr via ivank)
  BOOKKEEPER-685: Race in compaction algorithm from BOOKKEEPER-664 (ivank)

  hedwig-server:

  BOOKKEEPER-579: TestSubAfterCloseSub was put in a wrong package (sijie via ivank)
  BOOKKEEPER-601: readahead cache size isn't updated correctly (sijie via fpj)
  BOOKKEEPER-607: Filtered Messages Require ACK from Client Causes User Being Throttled Incorrectly Forever (sijie via ivank)
  BOOKKEEPER-683: TestSubAfterCloseSub fails on 4.2 (jiannan via ivank)

  hedwig-client:

  BOOKKEEPER-598: Fails to compile - RESUBSCRIBE_EXCEPTION conflict (Matthew Farrellee via ivank)
  BOOKKEEPER-603: Support Boost 1.53 for Hedwig Cpp Client (jiannan via ivank)
  BOOKKEEPER-600: shouldClaim flag isn't cleared for hedwig multiplex java client (sijie via fpj)

 NEW FEATURE:

  BOOKKEEPER-562: Ability to tell if a ledger is closed or not (fpj via ivank)

 IMPROVEMENTS:

  BOOKKEEPER-618: Better resolution of bookie address (ivank via fpj)

Release 4.2.1 - 2013-02-19

Backward compatible changes:

 BUGFIXES:

  bookkeeper-server:

  BOOKKEEPER-567: ReadOnlyBookieTest hangs on shutdown (sijie via ivank)
  BOOKKEEPER-549: Documentation missed for readOnlyMode support (ivank)
  BOOKKEEPER-548: Document about periodic ledger checker configuration (ivank)
  BOOKKEEPER-554: fd leaking when moving ledger index file (sijie via ivank)
  BOOKKEEPER-568: NPE during GC with HierarchicalLedgerManager (Matteo via sijie)
  BOOKKEEPER-569: Critical performance bug in InterleavedLedgerStorage (ivank via fpj)

Release 4.2.0 - 2013-01-14

Non-backward compatible changes:

 BUGFIXES:

 IMPROVEMENTS:

  bookkeeper-server:

  BOOKKEEPER-203: improve ledger manager interface to remove zookeeper dependency on metadata operations. (sijie via ivank)
  BOOKKEEPER-303: LedgerMetadata should be serialized using protobufs (ivank)

  hedwig-client:

  BOOKKEEPER-339: Let hedwig cpp client support returning message seq id for publish requests. (sijie via ivank)

Backward compatible changes:

 BUGFIXES:

  BOOKKEEPER-289: mvn clean doesn't remove test output files (sijie via ivank)
  BOOKKEEPER-298: We run with preferIPv4Stack in the scripts but not in the tests (ivank)
  BOOKKEEPER-292: Test backward compatibility automatically between versions.
    (ivank)
  BOOKKEEPER-352: Should not use static ServerStats/BKStats instance in TestServerStats/TestBKStats (sijie via fpj)
  BOOKKEEPER-338: Create Version.NEW and Version.ANY static instances of Version so that we're not passing around nulls (sijie via ivank)
  BOOKKEEPER-32: Clean up LOG.debug statements (Stu Hood via sijie)
  BOOKKEEPER-484: Misc fixes for test scripts (ivank via fpj)
  BOOKKEEPER-483: precommit tests only check toplevel rat file, not the one for submodules. (ivank via fpj)
  BOOKKEEPER-533: TestSubAfterCloseSub fails strangely in tests (ivank via fpj)
  BOOKKEEPER-480: Fix javac warnings (ivank via sijie)
  BOOKKEEPER-481: Fix javadoc warnings (ivank via sijie)

  bookkeeper-server:

  BOOKKEEPER-183: Provide tools to read/check data files in bookie server (sijie via ivank)
  BOOKKEEPER-307: BookieShell introduces 4 findbugs warnings (ivank via sijie)
  BOOKKEEPER-322: New protobufs generates findbugs errors (ivank)
  BOOKKEEPER-280: LedgerHandle.addEntry() should return an entryId (mmerli via ivank)
  BOOKKEEPER-324: Flakeyness in LedgerCreateDeleteTest (ivank)
  BOOKKEEPER-318: Spelling mistake in MultiCallback log message. (surendra via sijie)
  BOOKKEEPER-296: It's better to provide a stop script for bookie (nijel via sijie)
  BOOKKEEPER-294: Not able to start the bookkeeper before the ZK session timeout. (rakeshr via ivank)
  BOOKKEEPER-327: System.currentTimeMillis usage in BookKeeper (uma via fpj)
  BOOKKEEPER-349: Entry logger should close all the channels which are there in Map, instead of closing only current channel. (umamaheswararao via sijie)
  BOOKKEEPER-326: DeadLock during ledger recovery (rakeshr via ivank)
  BOOKKEEPER-372: Check service name in bookie start/stop script. (nijel via ivank)
  BOOKKEEPER-354: [BOOKKEEPER-296] [Documentation] Modify the bookkeeper start script and document the bookkeeper stop command in bookkeeperConfig.xml (Kiran BC via ivank)
  BOOKKEEPER-378: ReplicationWorker may not get ZK watcher notification on UnderReplication ledger lock deletion. (umamaheswararao & ivank via ivank)
  BOOKKEEPER-380: ZkLedgerUnderreplicationManager.markLedgerUnderreplicated() is adding duplicate missingReplicas while multiple bk failed for the same ledger (rakeshr via ivank)
  BOOKKEEPER-381: ReadLastConfirmedOp's Logger class name is wrong (surendra via sijie)
  BOOKKEEPER-382: space missed at concatenations in GarbageCollectorThread logging (Brahma via sijie)
  BOOKKEEPER-337: Add entry fails with MetadataVersionException when last ensemble has more than one bookie failure (rakeshr via ivank)
  BOOKKEEPER-376: LedgerManagers should consider 'underreplication' node as a special Znode (Uma via sijie)
  BOOKKEEPER-384: Clean up LedgerManagerFactory and LedgerManager usage in tests (rakeshr via ivank)
  BOOKKEEPER-385: replicateLedgerFragment should throw Exceptions in error conditions (umamahesh via ivank)
  BOOKKEEPER-386: It should not be possible to replicate a ledger fragment which is at the end of an open ledger (ivank & umamahesh via ivank)
  BOOKKEEPER-395: HDFS dep transitively depends on a busted pom (Stu Hood via sijie)
  BOOKKEEPER-387: BookKeeper Upgrade is not working. (surendra via sijie)
  BOOKKEEPER-383: NPE in BookieJournalTest (sijie via ivank)
  BOOKKEEPER-396: Compilation issue in TestClient.java of BenchMark (showing this in eclipse) (umamahesh via sijie)
  BOOKKEEPER-403: ReReadMetadataCb is not executed in the thread responsible for that ledger (ivank)
  BOOKKEEPER-405: Let's add Thread name for ReplicationWorker thread.
    (umamahesh via ivank)
  BOOKKEEPER-418: Store hostname of locker in replication lock (ivank)
  BOOKKEEPER-417: Hierarchical zk underreplication manager should clean up its hierarchy when done to allow for fast acquisition of underreplicated entries (ivank)
  BOOKKEEPER-436: Journal#rollLog may leak file handler (umamahesh via ivank)
  BOOKKEEPER-424: Bookie start is failing intermittently when zkclient connection delays (rakeshr via ivank)
  BOOKKEEPER-416: LedgerChecker returns underreplicated fragments for a closed ledger with no entries (ivank)
  BOOKKEEPER-425: Cleanup Bookie id generation (ivank via fpj)
  BOOKKEEPER-430: Remove manual bookie registration from overview (fpj via ivank)
  BOOKKEEPER-466: ZooKeeper test utility sets the port number as the tickTime (ivank)
  BOOKKEEPER-460: LedgerDeleteTest checks wrong place for log file (Fangmin Lv via ivank)
  BOOKKEEPER-477: In ReadOnlyBookieTest, we should wait for the bookie to die before asserting on it (ivank via fpj)
  BOOKKEEPER-485: TestFencing hung (ivank via fpj)
  BOOKKEEPER-351: asyncAddEntry should not throw an exception (Matteo Merli via sijie)
  BOOKKEEPER-291: BKMBeanRegistry uses log4j directly (fpj via ivank)
  BOOKKEEPER-459: Rename metastore mock implementation to InMemory implementation (jiannan via ivank)
  BOOKKEEPER-347: Provide mechanism to detect r-o bookie by the bookie clients (Vinay via ivank)
  BOOKKEEPER-475: BookieRecoveryTest#testSyncBookieRecoveryToRandomBookiesCheckForDupes() iterates too much (ivank via fpj)
  BOOKKEEPER-431: Duplicate definition of COOKIES_NODE (uma via fpj)
  BOOKKEEPER-474: BookieReadWriteTest#testShutdown doesn't make sense (ivank via fpj)
  BOOKKEEPER-465: CreateNewLog may overwrite lastLogId with smaller value (yixue, fpj via fpj)
  BOOKKEEPER-498: BookieRecoveryTest.tearDown NPE (fpj)
  BOOKKEEPER-497: GcLedgersTest has a potential race (ivank via sijie)
  BOOKKEEPER-493: moveLedgerIndexFile might have a chance to pick up the same directory (sijie via ivank)
  BOOKKEEPER-365: Ledger will never recover if one of the quorum bookies is down forever and others don't have the entry (sijie via ivank)
  BOOKKEEPER-336: bookie readEntries is taking more time if the ensemble has failed bookie(s) (ivank)
  BOOKKEEPER-512: BookieZkExpireTest fails periodically (ivank via sijie)
  BOOKKEEPER-509: TestBookKeeperPersistenceManager failed on latest trunk (sijie via ivank)
  BOOKKEEPER-496: Ensure that the auditor and replication worker will shutdown if they lose their ZK session (ivank)
  BOOKKEEPER-500: Fencing doesn't work when restarting bookies. (sijie via ivank)
  BOOKKEEPER-520: BookieFailureTest hangs on precommit build (ivank via sijie)
  BOOKKEEPER-447: Bookie can fail to recover if index pages flushed before ledger flush acknowledged (ivank via sijie)
  BOOKKEEPER-520: BookieFailureTest hangs on precommit build (sijie via fpj, jira reopened)
  BOOKKEEPER-514: TestDeadLock hanging sometimes (ivank, sijie via fpj)
  BOOKKEEPER-524: Bookie journal filesystem gets full after SyncThread is terminated with exception (Matteo, fpj via sijie)
  BOOKKEEPER-355: Ledger recovery will mark ledger as closed with -1, in case of slow bookie is added to ensemble during recovery add (ivank)
  BOOKKEEPER-534: Flakeyness in AuditorBookieTest (umamahesh via ivank)
  BOOKKEEPER-542: Remove trailing spaces in IndexCorruptionTest (fpj via ivank)
  BOOKKEEPER-530: data might be lost during compaction.
    (ivank)
  BOOKKEEPER-538: Race condition in BookKeeper#close (ivank via fpj)
  BOOKKEEPER-408: BookieReadWriteTest will enter an endless loop and will not exit (ivank)
  BOOKKEEPER-504: Fix findbugs warning in PendingReadOp (fpj via ivank)

  hedwig-protocol:

  BOOKKEEPER-394: CompositeException message is not useful (Stu Hood via sijie)
  BOOKKEEPER-468: Remove from protobuf generation in hedwig (ivank)

  hedwig-client:

  BOOKKEEPER-274: Hedwig cpp client library should not link to cppunit which is just used for test. (sijie via ivank)
  BOOKKEEPER-320: Let hedwig cpp client publish messages using Message object instead of string. (jiannan via ivank)
  BOOKKEEPER-371: NPE in hedwig hub client causes hedwig hub to shut down. (Aniruddha via sijie)
  BOOKKEEPER-392: Racey ConcurrentMap usage in java hedwig-client (Stu Hood via sijie)
  BOOKKEEPER-427: TestConcurrentTopicAcquisition hangs every so often (ivank)
  BOOKKEEPER-434: [Hedwig CPP Client] Delay resolving default host until necessary. (sijie via ivank)
  BOOKKEEPER-452: Rename ClientConfiguration multiplexing_enabled to subscription_connection_sharing_enabled (sijie via ivank)
  BOOKKEEPER-454: hedwig c++ tester script assumes sh is bash (ivank)
  BOOKKEEPER-470: Possible infinite loop in simple.SubscribeReconnectCallback (sijie via ivank)
  BOOKKEEPER-55: SubscribeReconnectRetryTask might retry subscription endlessly when another subscription is already successfully created previously (sijie via ivank)
  BOOKKEEPER-513: TestMessageFilter fails periodically (ivank)

  hedwig-server:

  BOOKKEEPER-302: No more messages delivered when hub server scans messages over two ledgers. (sijie via ivank)
  BOOKKEEPER-330: System.currentTimeMillis usage in Hedwig (uma via sijie)
  BOOKKEEPER-343: Failed to register hedwig JMX beans in test cases (sijie via ivank)
  BOOKKEEPER-259: Create a topic manager using versioned write for leader election (sijie via ivank)
  BOOKKEEPER-191: Hub server should change ledger to write, so consumed messages have chance to be garbage collected. (sijie via ivank)
  BOOKKEEPER-439: No more messages delivered after deleted consumed ledgers. (sijie via ivank)
  BOOKKEEPER-440: Make Write/Delete SubscriptionData Restricted to Version (Fangmin Lv via ivank)
  BOOKKEEPER-482: Precommit is reporting findbugs errors in trunk (ivank via sijie)
  BOOKKEEPER-442: Failed to deliver messages due to inconsistency between SubscriptionState and LedgerRanges. (jiannan via ivank)
  BOOKKEEPER-461: Delivery throughput degrades when there are lots of publishers w/ high traffic. (sijie via ivank)
  BOOKKEEPER-458: Annoying BKReadException error when changing ledger. (jiannan via fpj)
  BOOKKEEPER-507: Race condition happens if closeSubscription and subscribe happened at the same time (in multiplexed client). (sijie via ivank)
  BOOKKEEPER-532: AbstractSubscriptionManager#AcquireOp read subscriptions every time even if it already owned the topic. (sijie via fpj)
  BOOKKEEPER-531: Cache thread should wait until old entries are collected (sijie via ivank)
  BOOKKEEPER-529: stopServingSubscriber in delivery manager should remove stub callbacks in ReadAheadCache (sijie via ivank)
  BOOKKEEPER-543: Read zk host list in a wrong way in hedwig server (Fangmin via sijie)
  BOOKKEEPER-540: #stopServingSubscriber when channel is disconnected.
    (Fangmin via sijie)
  BOOKKEEPER-539: ClientNotSubscribedException & doesn't receive enough messages in TestThrottlingDelivery#testServerSideThrottle (sijie)
  BOOKKEEPER-503: The test case of TestThrottlingDelivery#testServerSideThrottle failed sometimes (jiannan & sijie via ivank)

 IMPROVEMENTS:

  BOOKKEEPER-467: Allocate ports for testing dynamically (ivank)
  BOOKKEEPER-471: Add scripts for preCommit testing (ivank)
  BOOKKEEPER-476: Log to file during tests (ivank via fpj)
  BOOKKEEPER-491: Hedwig doc for configuration (fpj, sijie via fpj)
  BOOKKEEPER-495: Revise BK config doc (fpj, ivank via fpj)
  BOOKKEEPER-523: Every test should have a timeout (ivank, sijie via fpj)
  BOOKKEEPER-541: Add guava to notice file (ivank via fpj)

  bookkeeper-server:

  BOOKKEEPER-328: Bookie DeathWatcher is missing thread name (Rakesh via sijie)
  BOOKKEEPER-2: bookkeeper does not put enough meta-data in to do recovery properly (ivank via sijie)
  BOOKKEEPER-317: Exceptions for replication (ivank via sijie)
  BOOKKEEPER-246: Recording of underreplication of ledger entries (ivank)
  BOOKKEEPER-247: Detection of under replication (ivank)
  BOOKKEEPER-299: Provide LedgerFragmentReplicator which should replicate the fragments found from LedgerChecker (umamahesh via ivank)
  BOOKKEEPER-248: Rereplicating of under replicated data (umamahesh via ivank)
  BOOKKEEPER-304: Prepare bookie vs ledgers cache and will be used by the Auditor (rakeshr via ivank)
  BOOKKEEPER-272: Provide automatic mechanism to know bookie failures (rakeshr via ivank)
  BOOKKEEPER-300: Create Bookie format command (Vinay via sijie)
  BOOKKEEPER-208: Separate write quorum from ack quorum (ivank)
  BOOKKEEPER-325: Delay the replication of a ledger if RW found that its last fragment is in underReplication. (umamahesh via ivank)
  BOOKKEEPER-388: Document bookie format command (kiran_bc via ivank)
  BOOKKEEPER-278: Ability to disable auto recovery temporarily (rakeshr via ivank)
  BOOKKEEPER-319: Manage auditing and replication processes (Vinay via ivank)
  BOOKKEEPER-315: Ledger entries should be replicated sequentially instead of parallel. (umamahesh via ivank)
  BOOKKEEPER-345: Detect IOExceptions on entrylogger and bookie should consider next ledger dir (if any) (Vinay via ivank)
  BOOKKEEPER-346: Detect IOExceptions in LedgerCache and bookie should look at next ledger dir (if any) (Vinay via ivank)
  BOOKKEEPER-444: Refactor pending read op to make speculative reads possible (ivank)
  BOOKKEEPER-204: Provide a MetaStore interface, and a mock implementation. (Jiannan Wang via ivank)
  BOOKKEEPER-469: Remove System.out.println from TestLedgerManager (ivank via fpj)
  BOOKKEEPER-205: implement a MetaStore based ledger manager for bookkeeper client. (jiannan via ivank)
  BOOKKEEPER-426: Make auditor Vote znode store a protobuf containing the host that voted (ivank)
  BOOKKEEPER-428: Expose command options in bookie scripts to disable/enable auto recovery temporarily (rakesh,ivank via fpj)
  BOOKKEEPER-511: BookieShell is very noisy (ivank via sijie)
  BOOKKEEPER-375: Document about Auto replication service in BK (umamahesh via ivank)
  BOOKKEEPER-490: add documentation for MetaStore interface (sijie, ivank via sijie)
  BOOKKEEPER-463: Refactor garbage collection code for ease to plugin different GC algorithm.
    (Fangmin, ivank, fpj via sijie)
  BOOKKEEPER-409: Integration Test - Perform bookie rereplication cycle by Auditor-RW processes (rakeshr via ivank)
  BOOKKEEPER-293: Periodic checking of ledger replication status (ivank)
  BOOKKEEPER-472: Provide an option to start Autorecovery along with Bookie Servers (umamahesh via ivank)
  BOOKKEEPER-341: add documentation for bookkeeper ledger manager interface. (sijie via ivank)

  hedwig-server:

  BOOKKEEPER-250: Need a ledger manager like interface to manage metadata operations in Hedwig (sijie via ivank)
  BOOKKEEPER-329: provide stop scripts for hub server (sijie via ivank)
  BOOKKEEPER-331: Let hedwig support returning message seq id for publish requests. (Mridul via sijie)
  BOOKKEEPER-340: Test backward compatibility for hedwig between different versions. (sijie via ivank)
  BOOKKEEPER-283: Improve Hedwig Console to use Hedwig Metadata Manager. (sijie via ivank)
  BOOKKEEPER-332: Add SubscriptionPreferences to record all preferences for a subscription (sijie via ivank)
  BOOKKEEPER-333: server-side message filter (sijie via ivank)
  BOOKKEEPER-252: Hedwig: provide a subscription mode to kill other subscription channel when hedwig client is used as a proxy-style server. (sijie via ivank)
  BOOKKEEPER-397: Make the hedwig client in RegionManager configurable. (Aniruddha via sijie)
  BOOKKEEPER-367: Server-Side Message Delivery Flow Control (sijie via ivank)
  BOOKKEEPER-415: Rename DeliveryThrottle to MessageWindowSize (ivank via sijie)
  BOOKKEEPER-422: Simplify AbstractSubscriptionManager (stu via fpj)
  BOOKKEEPER-435: Create SubscriptionChannelManager to manage all subscription channel (sijie via ivank)
  BOOKKEEPER-411: Add CloseSubscription Request for multiplexing support (sijie via ivank)
  BOOKKEEPER-441: InMemorySubscriptionManager should back up top2sub2seq before changing it (Yixue via ivank)
  BOOKKEEPER-479: Fix apache-rat issues in tree (ivank via fpj)
  BOOKKEEPER-457: Create a format command for Hedwig to cleanup its metadata. (sijie via ivank)
  BOOKKEEPER-487: Add existing hub server settings to configuration template file (sijie via ivank)
  BOOKKEEPER-389: add documentation for message filter. (sijie via ivank)
  BOOKKEEPER-399: Let hub server configure write quorum from ack quorum. (sijie via fpj)
  BOOKKEEPER-342: add documentation for hedwig metadata manager interface. (sijie, ivank via sijie)
  BOOKKEEPER-522: TestHedwigHub is failing silently on Jenkins (ivank via sijie)
  BOOKKEEPER-262: Implement a meta store based hedwig metadata manager. (jiannan via ivank)
  BOOKKEEPER-310: Changes in hedwig server to support JMS spec (ivank via sijie)

  hedwig-client:

  BOOKKEEPER-306: Change C++ client to use gtest for testing (ivank via sijie)
  BOOKKEEPER-334: client-side message filter for java client. (sijie via ivank)
  BOOKKEEPER-335: client-side message filter for cpp client. (sijie via ivank)
  BOOKKEEPER-364: re-factor hedwig java client to support both one-subscription-per-channel and multiplex-subscriptions-per-channel. (sijie via ivank)
  BOOKKEEPER-143: Add SSL support for hedwig cpp client (sijie via ivank)
  BOOKKEEPER-413: Hedwig C++ client: Rename RUN_AS_SSL_MODE to SSL_ENABLED (ivank via sijie)
  BOOKKEEPER-369: re-factor hedwig cpp client to provide better interface to support both one-subscription-per-channel and multiple-subscriptions-per-channel. (sijie via ivank)
  BOOKKEEPER-368: Implementing multiplexing java client. (sijie via ivank)
  BOOKKEEPER-370: implement multiplexing cpp client.
    (sijie via ivank)
  BOOKKEEPER-453: Extract commonality from MultiplexSubscribeResponseHandler and SimpleSubscribeResponseHandler and put into an abstract class (sijie via ivank)
  BOOKKEEPER-404: Deprecate non-SubscriptionOptions Subscriber Apis (ivank via sijie)

Release 4.1.0 - 2012-06-07

Non-backward compatible changes:

 BUGFIXES:

 IMPROVEMENTS:

Backward compatible changes:

 BUGFIXES:

  BOOKKEEPER-145: Put notice and license file for distributed binaries in SVN (ivank)
  BOOKKEEPER-254: Bump zookeeper version in poms (ivank)
  BOOKKEEPER-72: Fix warnings issued by FindBugs (ivank)
  BOOKKEEPER-238: Add log4j.properties in conf/ for bin packages (ivank)

  bookkeeper-server/

  BOOKKEEPER-142: Parsing last log id is wrong, which may make entry log files overwritten (Sijie Guo via ivank)
  BOOKKEEPER-141: Run extracting ledger id from entry log files in GC thread to speed up bookie restart (Sijie Guo via ivank)
  BOOKKEEPER-148: Jenkins build is failing (ivank via fpj)
  BOOKKEEPER-40: BookieClientTest fails intermittently (fpj via ivank)
  BOOKKEEPER-150: Entry is lost when recovering a ledger with not enough bookies. (Sijie Guo via ivank)
  BOOKKEEPER-153: Ledger can't be opened or closed due to zero-length metadata (Sijie Guo via ivank)
  BOOKKEEPER-23: Timeout requests (ivank)
  BOOKKEEPER-161: PerChannelBookieClient tries to reuse HashedWheelTimer, throws Exception (ivank)
  BOOKKEEPER-167: PerChannelBookieClient doesn't use ClientConfiguration (Sijie Guo via ivank)
  BOOKKEEPER-156: BookieJournalRollingTest failing (Sijie Guo via ivank)
  BOOKKEEPER-162: LedgerHandle.readLastConfirmed does not work (fpj)
  BOOKKEEPER-152: Can't recover a ledger whose current ensemble contain failed bookie. (ivank)
  BOOKKEEPER-171: ServerConfiguration can't use more than one directory for ledgers. (ivank via sijie)
  BOOKKEEPER-170: Bookie constructor starts a number of threads. (ivank via fpj)
  BOOKKEEPER-169: bookie hangs on reading header when encountering partial header index file (sijie via ivank)
  BOOKKEEPER-174: Bookie can't start when replaying entries whose ledger were deleted and garbage collected. (sijie via ivank)
  BOOKKEEPER-177: Index file is lost or some index pages aren't flushed. (sijie via ivank)
  BOOKKEEPER-113: NPE In BookKeeper test (fpj via ivank)
  BOOKKEEPER-176: HierarchicalBookieFailureTest Hung (ivank via fpj)
  BOOKKEEPER-180: bookie server doesn't quit when running out of disk space (sijie via ivank)
  BOOKKEEPER-185: Remove bookkeeper-server dependency on hadoop-common (ivank)
  BOOKKEEPER-184: CompactionTest failing on Jenkins (sijie via ivank)
  BOOKKEEPER-182: Entry log file is overwritten when fail to read lastLogId. (sijie via ivank)
  BOOKKEEPER-186: Bookkeeper throttling - permits are not released when read has failed from all replicas (Rakesh R via sijie)
  BOOKKEEPER-189: AbstractZkLedgerManager doesn't disregard cookies (ivank via sijie)
  BOOKKEEPER-195: HierarchicalLedgerManager doesn't consider idgen as a "specialNode" (ivank)
  BOOKKEEPER-190: Add entries would fail when number of open ledgers reaches more than openFileLimit. (sijie via ivank)
  BOOKKEEPER-194: Get correct latency for addEntry operations for JMX. (sijie via ivank)
  BOOKKEEPER-166: Bookie will not recover its journal if the length prefix of an entry is truncated (ivank)
  BOOKKEEPER-193: Ledger is garbage collected by mistake. (sijie, ivank via sijie)
  BOOKKEEPER-198: replaying entries of deleted ledgers would exhaust ledger cache.
    (sijie)
  BOOKKEEPER-112: Bookie Recovery on an open ledger will cause LedgerHandle#close on that ledger to fail (sijie)
  BOOKKEEPER-135: Fencing does not check the ledger masterPasswd (ivank)
  BOOKKEEPER-212: Bookie stops responding when creating and deleting many ledgers (sijie via fpj)
  BOOKKEEPER-211: Bookie fails to start (sijie)
  BOOKKEEPER-200: Fix format and comments (fpj)
  BOOKKEEPER-216: Bookie doesn't exit with right exit code (sijie via ivank)
  BOOKKEEPER-196: Define interface between bookie and ledger storage (ivank)
  BOOKKEEPER-213: PerChannelBookieClient calls the wrong errorOut function when encountering an exception (Aniruddha via sijie)
  BOOKKEEPER-231: ZKUtil.killServer not closing the FileTxnSnapLog from ZK. (Uma Maheswara Rao G via sijie)
  BOOKKEEPER-232: AsyncBK tests failing (umamaheswararao via ivank)
  BOOKKEEPER-229: Deleted entry log files would be garbage collected again and again. (sijie via fpj)
  BOOKKEEPER-242: Bookkeeper not able to connect other zookeeper when shutdown the zookeeper server where the BK has connected. (sijie & rakeshr via ivank)
  BOOKKEEPER-234: EntryLogger will throw NPE, if any dir does not exist or IO Errors. (umamaheswararao via ivank)
  BOOKKEEPER-235: Bad syncing in entrylogger degrades performance for many concurrent ledgers (ivank via fpj)
  BOOKKEEPER-224: Fix findbugs in bookkeeper-server component (ivank)
  BOOKKEEPER-251: Noisy error message printed when scanning entry log files that have been garbage collected. (sijie via ivank)
  BOOKKEEPER-266: Review versioning documentation (ivank)
  BOOKKEEPER-258: CompactionTest failed (ivank via sijie)
  BOOKKEEPER-273: LedgerHandle.deleteLedger() should be idempotent (Matteo Merli via ivank)
  BOOKKEEPER-281: BKClient is failing when zkclient connection delays (ivank via sijie)
  BOOKKEEPER-279: LocalBookKeeper is failing intermittently due to zkclient connection establishment delay (Rakesh R via sijie)
  BOOKKEEPER-286: Compilation warning (ivank via sijie)
  BOOKKEEPER-287: NoSuchElementException in LedgerCacheImpl (sijie)
  BOOKKEEPER-288: NOTICE files don't have the correct year (ivank via sijie)

  hedwig-client/

  BOOKKEEPER-217: NPE in hedwig client when enable DEBUG (sijie via ivank)

  hedwig-server/

  BOOKKEEPER-140: Hub server doesn't subscribe remote region correctly when a region is down. (Sijie Guo via ivank)
  BOOKKEEPER-133: Hub server should update subscription state to zookeeper when losing topic or shutting down (Sijie Guo via ivank)
  BOOKKEEPER-74: Bookkeeper Persistence Manager should give up topic on error (sijie via ivank)
  BOOKKEEPER-163: Prevent incorrect NoSuchLedgerException for readLastConfirmed. (ivank via sijie)
  BOOKKEEPER-197: HedwigConsole uses the same file to load bookkeeper client config and hub server config (sijie)
  BOOKKEEPER-56: Race condition of message handler in connection recovery in Hedwig client (sijie & Gavin Li via ivank)
  BOOKKEEPER-215: Deadlock occurs under high load (sijie via ivank)
  BOOKKEEPER-245: Intermittent failures in PersistenceManager tests (ivank)
  BOOKKEEPER-209: Typo in ServerConfiguration for READAHEAD_ENABLED (ivank)
  BOOKKEEPER-146: TestConcurrentTopicAcquisition sometimes hangs (ivank)
  BOOKKEEPER-285: TestZkSubscriptionManager quits due to NPE, so other tests are not run in hedwig server.
    (sijie)

  bookkeeper-benchmark/

  BOOKKEEPER-207: BenchBookie doesn't run correctly (ivank via fpj)
  BOOKKEEPER-228: Fix the bugs in BK benchmark (umamaheswararao via ivank)

 IMPROVEMENTS:

  BOOKKEEPER-265: Review JMX documentation (sijie via fpj)

  bookkeeper-server/

  BOOKKEEPER-95: extends zookeeper JMX to monitor and manage bookie server (Sijie Guo via ivank)
  BOOKKEEPER-98: collect add/read statistics on bookie server (Sijie Guo via ivank)
  BOOKKEEPER-157: For small packets, increasing number of bookies actually degrades performance. (ivank via fpj)
  BOOKKEEPER-165: Add versioning support for journal files (ivank)
  BOOKKEEPER-137: Do not create Ledger index files until absolutely necessary. (ivank)
  BOOKKEEPER-172: Upgrade framework for filesystem layouts (ivank via fpj)
  BOOKKEEPER-178: Delay ledger directory creation until the ledger index file was created (sijie via ivank)
  BOOKKEEPER-160: bookie server needs to do compaction over entry log files to reclaim disk space (sijie via ivank)
  BOOKKEEPER-187: Create well defined interface for LedgerCache (ivank)
  BOOKKEEPER-175: Bookie code is very coupled (ivank)
  BOOKKEEPER-188: Garbage collection code is in the wrong place (ivank via sijie)
  BOOKKEEPER-218: Provide journal manager to manage journal related operations (sijie)
  BOOKKEEPER-173: Uncontrolled number of threads in bookkeeper (sijie via fpj)
  BOOKKEEPER-241: Add documentation for bookie entry log compaction (sijie via fpj)
  BOOKKEEPER-263: ZK ledgers root path is hard coded (Aniruddha via sijie)
  BOOKKEEPER-260: Define constant for -1 (invalid entry id) (ivank via fpj)
  BOOKKEEPER-270: Review documentation on bookie cookie (ivank via fpj)

  hedwig-server/

  BOOKKEEPER-77: Add a console client for hedwig (Sijie Guo via ivank)
  BOOKKEEPER-168: Message bounding on subscriptions (ivank)
  BOOKKEEPER-96: extends zookeeper JMX to monitor and manage hedwig server (sijie via ivank)
  BOOKKEEPER-97: collect pub/sub/consume statistics on hub server (sijie via ivank)
  BOOKKEEPER-269: Review documentation for hedwig console client (sijie via fpj)

  hedwig-client/

  BOOKKEEPER-271: Review documentation for message bounding (ivank via fpj)

  bookkeeper-benchmark/

  BOOKKEEPER-158: Move latest benchmarking code into trunk (ivank via fpj)
  BOOKKEEPER-236: Benchmarking improvements from latest round of benchmarking (ivank via fpj)

Release 4.0.0 - 2011-11-30

Non-backward compatible changes:

 BUGFIXES:

  BOOKKEEPER-89: Bookkeeper API changes for initial Bookkeeper release (ivank)
  BOOKKEEPER-108: add configuration support for BK (Sijie via ivank)
  BOOKKEEPER-90: Hedwig API changes for initial Bookkeeper release (ivank via fpj)

Backward compatible changes:

 BUGFIXES:

  BOOKKEEPER-124: build has RAT failures (ivank)
  BOOKKEEPER-121: Review Hedwig client documentation (breed via ivank)
  BOOKKEEPER-127: Make poms use official zookeeper 3.4.0 (ivank)
  BOOKKEEPER-120: Review BookKeeper client documentation (ivank)
  BOOKKEEPER-122: Review BookKeeper server documentation (fpj & ivank)
  BOOKKEEPER-66: use IPv4 for builds (mmorel via ivank)
  BOOKKEEPER-132: Sign artifacts before deploying to maven (ivank)
  BOOKKEEPER-131: Fix zookeeper test dependency (ivank)
  BOOKKEEPER-134: Delete superfluous lib directories (ivank)
  BOOKKEEPER-138: NOTICE.txt is invalid (ivank)
  BOOKKEEPER-139: Binary packages do not carry NOTICE.txt (ivank)

  bookkeeper-server/

  BOOKKEEPER-1: Static variable makes tests fail (fpj via ivank)
  BOOKKEEPER-19: BookKeeper doesn't support more than 2Gig of memory (ivank via fpj)
  BOOKKEEPER-22: Exception in LedgerCache causes addEntry request to fail
    (fpj via fpj)
  BOOKKEEPER-5: Issue with Netty in BookKeeper (fpj and ivank via fpj)
  BOOKKEEPER-30: Tests are too noisy (ivank via fpj)
  BOOKKEEPER-11: Read from open ledger (fpj via ivank)
  BOOKKEEPER-27: mvn site failed with unresolved dependencies (ivank via fpj)
  BOOKKEEPER-29: BookieRecoveryTest fails intermittently (fpj via ivank)
  BOOKKEEPER-33: Add length and offset parameter to addEntry (ivank via fpj)
  BOOKKEEPER-29: BookieRecoveryTest fails intermittently (ivank, fpj via fpj)
  BOOKKEEPER-38: Bookie Server doesn't exit when its zookeeper session is expired. So the process hangs there. (Sijie Guo via breed)
  BOOKKEEPER-58: Changes introduced in BK-38 cause BookieClientTest to hang indefinitely. (ivank)
  BOOKKEEPER-18: maven build is unstable (mmorel, ivank via ivank)
  BOOKKEEPER-57: NullPointerException at bookie.zk@EntryLogger (xulei via ivank)
  BOOKKEEPER-59: Race condition in netty code allocates and orphans resources (BK-5 revisited) (ivank via fpj)
  BOOKKEEPER-68: Conditional setData (fpj via ivank)
  BOOKKEEPER-86: bookkeeper-benchmark fails to compile after BOOKKEEPER-68 (ivank via breed)
  BOOKKEEPER-61: BufferedChannel reads endlessly when the remaining bytes of file is less than the capacity of read buffer (Sijie Guo via breed)
  BOOKKEEPER-84: Add versioning for ZK metadata (ivank via breed)
  BOOKKEEPER-92: using wrong context object in readLastConfirmedComplete callback (Sijie Guo via ivank)
  BOOKKEEPER-94: Double callbacks in readLastConfirmedOp which fails readLastConfirmed operation even received enough valid responses. (Sijie Guo via ivank)
  BOOKKEEPER-83: Added versioning and flags to the bookie protocol (ivank)
  BOOKKEEPER-93: bookkeeper doesn't work correctly on OpenLedgerNoRecovery (Sijie Guo via ivank)
  BOOKKEEPER-103: ledgerId and entryId are parsed wrong when addEntry (Sijie Guo via ivank)
  BOOKKEEPER-50: NullPointerException at LedgerDescriptor#cmpMasterKey (Sijie Guo via ivank)
  BOOKKEEPER-82: support journal rolling (Sijie Guo via fpj)
  BOOKKEEPER-106: recoveryBookieData can select a recovery bookie which is already in the ledgers ensemble (ivank via fpj)
  BOOKKEEPER-101: Add Fencing to Bookkeeper (ivank)
  BOOKKEEPER-104: Add versioning between bookie and its filesystem layout (ivank)
  BOOKKEEPER-81: disk space of garbage collected entry logger files isn't reclaimed until process quits (Sijie Guo via fpj)
  BOOKKEEPER-91: Bookkeeper and hedwig clients should not use log4j directly (ivank via fpj)
  BOOKKEEPER-115: LocalBookKeeper fails after BOOKKEEPER-108 (ivank)
  BOOKKEEPER-114: add a shutdown hook to shut down bookie server safely.
    (Sijie via ivank)
  BOOKKEEPER-39: Bookie server failed to restart because of too many ledgers (more than ~50,000 ledgers) (Sijie via ivank)
  BOOKKEEPER-125: log4j still used in some places (ivank)
  BOOKKEEPER-62: Bookie can not start when encountering corrupted records (breed via ivank)
  BOOKKEEPER-111: Document bookie recovery feature (ivank)
  BOOKKEEPER-129: ZK_TIMEOUT typo in client/server configuration (Sijie via ivank)

  hedwig-server/

  BOOKKEEPER-43: NullPointerException when releasing topic (Sijie Guo via breed)
  BOOKKEEPER-51: NullPointerException at FIFODeliveryManager#deliveryPtrs (xulei via ivank)
  BOOKKEEPER-63: Hedwig PubSubServer must wait for its Zookeeper client to be connected upon startup (mmorel via ivank)
  BOOKKEEPER-100: Some hedwig tests have build errors (dferro via ivank)
  BOOKKEEPER-69: ServerRedirectLoopException when a machine (hosts bookie server & hub server) reboots, which is caused by race condition of topic manager (Sijie, ivank via ivank)

  hedwig-client/

  BOOKKEEPER-52: Message sequence confusion due to the subscribeMsgQueue@SubscribeResponseHandler (xulei via ivank)
  BOOKKEEPER-88: derby doesn't like - in the topic names (breed via ivank)
  BOOKKEEPER-71: hedwig c++ client does not build. (ivank)
  BOOKKEEPER-107: memory leak in HostAddress of hedwig c++ client (Sijie Guo via ivank)
  BOOKKEEPER-80: subscription msg queue race condition in hedwig c++ client (Sijie Guo via ivank)
  BOOKKEEPER-87: TestHedwigHub exhausts direct buffer memory with netty 3.2.4.Final (ivank via fpj)
  BOOKKEEPER-79: randomly startDelivery/stopDelivery will core dump in c++ hedwig client (Sijie Guo via ivank)
  BOOKKEEPER-118: Hedwig client doesn't kill and remove old subscription channel after redirection. (Sijie Guo via ivank)
  BOOKKEEPER-117: Support multi threads in hedwig cpp client to leverage multi-core hardware (Sijie Guo via ivank)
  BOOKKEEPER-53: race condition of outstandingMsgSet@SubscribeResponseHandler (fpj via breed)

 IMPROVEMENTS:

  BOOKKEEPER-28: Create useful startup scripts for bookkeeper and hedwig (ivank)
  BOOKKEEPER-26: Indentation is all messed up in the BookKeeper code (ivank via fpj)
  BOOKKEEPER-41: Generation of packages for distribution (ivank via fpj)
  BOOKKEEPER-65: fix dependencies on incompatible versions of netty (mmorel via ivank)
  BOOKKEEPER-102: Make bookkeeper use ZK from temporary repo (ivank)
  BOOKKEEPER-128: pom and script modifications required for generating release packages (ivank)

  hedwig-client/

  BOOKKEEPER-44: Reuse publish channel to default server to avoid too many connect requests to default server when lots of producers came in same time (Sijie Guo via breed)
  BOOKKEEPER-109: Add documentation to describe how bookies flush data (Sijie Guo via fpj)
  BOOKKEEPER-119: Keys in configuration have inconsistent style (ivank via fpj)

bookkeeper-release-4.2.4/LICENSE

 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. 
This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
bookkeeper-release-4.2.4/NOTICE

Apache BookKeeper
Copyright 2011-2014 The Apache Software Foundation

This product includes software developed at
The Apache Software Foundation (http://www.apache.org/).

bookkeeper-release-4.2.4/README

Build instructions for BookKeeper

-------------------------------------------------------------------------------
Requirements:

* Unix System
* JDK 1.6
* Maven 3.0
* Autotools (if compiling native hedwig client)
* Internet connection for first build (to fetch all dependencies)

-------------------------------------------------------------------------------
The BookKeeper project contains:

 - bookkeeper-server (BookKeeper server and client)
 - bookkeeper-benchmark (Benchmark suite for testing BookKeeper performance)
 - hedwig-protocol (Hedwig network protocol)
 - hedwig-client (Hedwig client library)
 - hedwig-server (Hedwig server)

BookKeeper is a system to reliably log streams of records. It is designed to store write ahead logs, such as those found in database or database like applications.

Hedwig is a publish-subscribe system designed to carry large amounts of data across the internet in a guaranteed-delivery fashion from those who produce it (publishers) to those who are interested in it (subscribers).

--------------------------------------------------------------------------------
How do I build?

BookKeeper uses maven as its build system. To build, run "mvn package" from the top-level directory, or from within any of the submodules.

Useful maven commands are:

* Clean                    : mvn clean
* Compile                  : mvn compile
* Run tests                : mvn test
* Create JAR               : mvn package
* Run findbugs             : mvn compile findbugs:findbugs
* Install JAR in M2 cache  : mvn install
* Deploy JAR to Maven repo : mvn deploy
* Run Rat                  : mvn apache-rat:check
* Build javadocs           : mvn compile javadoc:aggregate
* Build distribution       : mvn package assembly:single

Test options:

* Use -DskipTests to skip tests when running the following Maven goals: 'package', 'install', 'deploy' or 'verify'
* -Dtest=<testName>,<testName>,...
* -Dtest.exclude=<testName>
* -Dtest.exclude.pattern=**/<testName1>.java,**/<testName2>.java

--------------------------------------------------------------------------------
How do I run the services?

Running a Hedwig service requires a running BookKeeper service, which in turn requires a running ZooKeeper service (see http://zookeeper.apache.org).

To start a bookkeeper service quickly for testing, run:

  $ bookkeeper-server/bin/bookkeeper localbookie 10

This will start a standalone ZooKeeper instance and 10 BookKeeper bookies. Note that this is only useful for testing. Data is not persisted between runs.

To start a real BookKeeper service, you must set up a ZooKeeper instance and start a bookie on several machines. Modify bookkeeper-server/conf/bk_server.conf to point to your ZooKeeper instance. To start a bookie, run:

  $ bookkeeper-server/bin/bookkeeper bookie

Once you have at least 3 bookies running, you can start some Hedwig hubs. A hub is a machine which is responsible for a set of topics in the pubsub system. The service automatically distributes the topics among the hubs.
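The ZooKeeper address that each bookie reads comes from the zkServers setting in bookkeeper-server/conf/bk_server.conf. As a minimal sketch of that line (the hostnames below are placeholders for your own ZooKeeper ensemble, not values shipped with BookKeeper):

  zkServers=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181

All bookies in the same cluster should point at the same ensemble.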
To start a hedwig hub:

  $ hedwig-server/bin/hedwig server

You can get more help on using these commands by running:

  $ bookkeeper-server/bin/bookkeeper help

and

  $ hedwig-server/bin/hedwig help

bookkeeper-release-4.2.4/bin/find-new-patch-available-jiras

#!/bin/bash
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

if [ "${TESTPATCHDEBUG}" == "true" ] ; then
  set -x
fi

BASEDIR=$(pwd)
TEMPDIR=${BASEDIR}/tmp
JIRAAVAILPATCHQUERY="https://issues.apache.org/jira/sr/jira.issueviews:searchrequest-xml/temp/SearchRequest.xml?jqlQuery=project+in+%28BOOKKEEPER%29+AND+status+%3D+%22Patch+Available%22+ORDER+BY+updated+DESC&tempMax=1000"
TESTPATCHJOBURL="https://builds.apache.org/job/bookkeeper-trunk-precommit-build"
TOKEN=""
SUBMIT="false"
DELETEHISTORYFILE="false"
RUNTESTSFILE=${BASEDIR}/TESTED_PATCHES.txt

printUsage() {
  echo "Usage: $0 <options>"
  echo "          --submit --token=<token>"
  echo "          [--delete-history-file]"
  echo "          [--script-debug]"
  echo
}

###############################################################################
parseArgs() {
  for i in $*
  do
    case $i in
    --submit)
      SUBMIT="true"
      ;;
    --token=*)
      TOKEN=${i#*=}
      ;;
    --script-debug)
      DEBUG="-x"
      ;;
    --delete-history-file)
      DELETEHISTORYFILE="true"
      ;;
    *)
      echo "Invalid option"
      echo
      printUsage
      exit 1
      ;;
    esac
  done
  if [[ "$SUBMIT" == "true" && "${TOKEN}" == "" ]] ; then
    echo "Token has not been specified"
    echo
    printUsage
    exit 1
  fi
}

###############################################################################
findAndSubmitAvailablePatches() {
  ## Grab all the key (issue numbers) and largest attachment id for each item in the XML
  curl --fail --location --retry 3 "${JIRAAVAILPATCHQUERY}" > ${TEMPDIR}/patch-availables.xml
  if [ "$?" != "0" ] ; then
    echo "Could not retrieve available patches from JIRA"
    exit 1
  fi
  xpath -e "//item/key/text() | //item/attachments/attachment[not(../attachment/@id > @id)]/@id" \
    ${TEMPDIR}/patch-availables.xml > ${TEMPDIR}/patch-attachments.element

  ### Replace newlines with nothing, then replace id=" with =, then replace " with newline
  ### to yield lines with pairs (issueNumber,largestAttachmentId).
Example: BOOKKEEPER-123,456984 cat ${TEMPDIR}/patch-attachments.element \ | awk '{ if ( $1 ~ /^BOOKKEEPER\-/) {JIRA=$1 }; if ($1 ~ /id=/) { print JIRA","$1} }' \ | sed 's/id\="//' | sed 's/"//' > ${TEMPDIR}/patch-availables.pair ### Iterate through issue list and find the (issueNumber,largestAttachmentId) pairs that have ### not been tested (ie don't already exist in the patch_tested.txt file touch ${RUNTESTSFILE} cat ${TEMPDIR}/patch-availables.pair | while read PAIR ; do set +e COUNT=`grep -c "$PAIR" ${RUNTESTSFILE}` set -e if [ "$COUNT" -lt "1" ] ; then ### Parse $PAIR into project, issue number, and attachment id ISSUE=`echo $PAIR | sed -e "s/,.*$//"` echo "Found new patch for issue $ISSUE" if [ "$SUBMIT" == "true" ]; then ### Kick off job echo "Submitting job for issue $ISSUE" curl --fail --location --retry 3 \ "${TESTPATCHJOBURL}/buildWithParameters?token=${TOKEN}&JIRA_NUMBER=${ISSUE}" > /dev/null if [ "$?" != "0" ] ; then echo "Could not submit precommit job for $ISSUE" exit 1 fi fi ### Mark this pair as tested by appending to file echo "$PAIR" >> ${RUNTESTSFILE} fi done } ############################################################################### mkdir -p ${TEMPDIR} 2>&1 $STDOUT parseArgs "$@" if [ -n "${DEBUG}" ] ; then set -x fi if [ "${DELETEHISTORYFILE}" == "true" ] ; then rm ${RUNTESTSFILE} fi findAndSubmitAvailablePatches exit 0 bookkeeper-release-4.2.4/bin/raw-check-patch000066400000000000000000000024061244507361200207270ustar00rootroot00000000000000#!/usr/bin/env bash # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. printTrailingSpaces() { PATCH=$1 cat $PATCH | awk '/^+/ { if (/ $/) { print "\tL" NR ":" $0} }' } printTabs() { PATCH=$1 cat $PATCH | awk '/^+/ { if (/\t/) { print "\tL" NR ":" $0 } }' } printAuthors() { PATCH=$1 cat $PATCH | awk '/^+/ { L=tolower($0); if (L ~ /.*\*.* @author/) { print "\tL" NR ":" $0 } }' } printLongLines() { PATCH=$1 cat $PATCH | awk '/^+/ { if ( length > 121 ) { print "\tL" NR ":" $0 } }' } if [[ "X$(basename -- "$0")" = "Xraw-check-patch" ]]; then echo Trailing spaces printTrailingSpaces $1 echo echo Tabs printTabs $1 echo echo Authors printAuthors $1 echo echo Long lines printLongLines $1 fi bookkeeper-release-4.2.4/bin/test-patch000077500000000000000000000312041244507361200200430ustar00rootroot00000000000000#!/bin/bash # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
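# Typical invocations (illustrative; the issue number and patch path are placeholders):
#   bin/test-patch --jira=BOOKKEEPER-123 --reset-scm
#   bin/test-patch --patch=/tmp/BOOKKEEPER-123.patch --dirty-scm --skip-tasks=TESTS
# The first form downloads the latest attachment on the JIRA issue and runs every
# task against a freshly reset workspace; the second form tests a local patch file
# in place and skips the long-running TESTS task.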
if [ "${TESTPATCHDEBUG}" == "true" ] ; then set -x fi BASEDIR=$(pwd) TESTPATCHDIRNAME=test-patch TESTPATCHDIR=${BASEDIR}/${TESTPATCHDIRNAME} TOOLSDIR=${TESTPATCHDIR}/tools TEMPDIR=${TESTPATCHDIR}/tmp REPORTDIR=${TESTPATCHDIR}/reports SUMMARYFILE=${REPORTDIR}/TEST-SUMMARY.jira SUMMARYFILETXT=${REPORTDIR}/TEST-SUMMARY.txt JIRAHOST="https://issues.apache.org" JIRAURL="${JIRAHOST}/jira" JIRAURLISSUEPREFIX="${JIRAURL}/browse/" JIRAUPDATE="false" JIRAUSER="" JIRAPASSWORD="" VERBOSEOPTION="" JIRAISSUE="" PATCHFILE="" TASKSTORUN="" TASKSTOSKIP="" RESETSCM="false" DIRTYSCM="false" STDOUT="/dev/null" MVNPASSTHRU="" ############################################################################### gitOrSvn() { SCM="NONE" which git &> /dev/null if [[ $? == 0 ]] ; then git status &> /dev/null if [[ $? == 0 ]] ; then SCM="git" fi fi if [ "${SCM}" == "NONE" ] ; then which svn &> /dev/null if [[ $? == 0 ]] ; then svnOutput=`svn status 2>&1` if [[ "$svnOutput" != *"is not a working copy" ]] ; then SCM="svn" fi fi fi if [ "${SCM}" == "NONE" ] ; then echo "The current workspace is not under Source Control (GIT or SVN)" exit 1 fi } ############################################################################### prepareSCM() { gitOrSvn if [ "${DIRTYSCM}" != "true" ] ; then if [ "${RESETSCM}" == "true" ] ; then if [ "${SCM}" == "git" ] ; then git reset --hard HEAD > /dev/null git clean -f -d -e $TESTPATCHDIRNAME > /dev/null fi if [ "${SCM}" == "svn" ] ; then svn revert -R . > /dev/null svn status | grep "\?" | awk '{print $2}' | xargs rm -rf fi else echo "It should not happen DIRTYSCM=false & RESETSCM=false" exit 1 fi echo "Cleaning local ${SCM} workspace" >> ${SUMMARYFILE} else echo "WARNING: Running test-patch on a dirty local ${SCM} workspace" >> ${SUMMARYFILE} fi } ############################################################################### prepareTestPatchDirs() { mkdir -p ${TESTPATCHDIR} 2> /dev/null rm -rf ${REPORTDIR} 2> /dev/null rm -rf ${TEMPDIR} 2> /dev/null mkdir -p ${TOOLSDIR} 2> /dev/null mkdir -p ${TEMPDIR} 2> /dev/null mkdir -p ${REPORTDIR} 2> /dev/null if [ ! -e "${TESTPATCHDIR}" ] ; then echo "Could not create test-patch/ dir" exit 1 fi } ############################################################################### updateJira() { if [[ "${JIRAUPDATE}" != "" && "${JIRAISSUE}" != "" ]] ; then if [[ "$JIRAPASSWORD" != "" ]] ; then JIRACLI=${TOOLSDIR}/jira-cli/jira.sh if [ ! -e "${JIRACLI}" ] ; then curl https://bobswift.atlassian.net/wiki/download/attachments/16285777/jira-cli-2.6.0-distribution.zip > ${TEMPDIR}/jira-cli.zip if [ $? 
!= 0 ] ; then echo echo "Could not download jira-cli tool, thus no JIRA updating" echo exit 1 fi mkdir ${TEMPDIR}/jira-cli-tmp (cd ${TEMPDIR}/jira-cli-tmp;jar xf ${TEMPDIR}/jira-cli.zip; mv jira-cli-2.6.0 ${TOOLSDIR}/jira-cli) chmod u+x ${JIRACLI} fi echo "Adding comment to JIRA" comment=`cat ${SUMMARYFILE}` $JIRACLI -s $JIRAURL -a addcomment -u $JIRAUSER -p "$JIRAPASSWORD" --comment "$comment" --issue $JIRAISSUE echo else echo "Skipping JIRA update" echo fi fi } ############################################################################### cleanupAndExit() { updateJira exit $1 } ############################################################################### printUsage() { echo "Usage: $0 " echo " (--jira= | --patch=)" echo " (--reset-scm | --dirty-scm)" echo " [--tasks=]" echo " [--skip-tasks=]" echo " [--jira-cli=]" echo " [--jira-user=]" echo " [--jira-password=]" echo " [-D...]" echo " [-P...]" echo " [--list-tasks]" echo " [--verbose]" echo } ############################################################################### parseArgs() { for i in $* do case $i in --jira=*) JIRAISSUE=${i#*=} ;; --patch=*) PATCHFILE=${i#*=} ;; --tasks=*) TASKSTORUN=${i#*=} ;; --skip-tasks=*) TASKSTOSKIP=${i#*=} ;; --list-tasks) listTasks cleanupAndExit 0 ;; --jira-cli=*) JIRACLI=${i#*=} ;; --jira-user=*) JIRAUSER=${i#*=} ;; --jira-password=*) JIRAPASSWORD=${i#*=} JIRAUPDATE="true" ;; -D*) MVNPASSTHRU="${MVNPASSTHRU} $i" ;; -P*) MVNPASSTHRU="${MVNPASSTHRU} $i" ;; --reset-scm) RESETSCM="true" ;; --dirty-scm) DIRTYSCM="true" ;; --verbose) VERBOSEOPTION="--verbose" STDOUT="/dev/stdout" ;; *) echo "Invalid option" echo printUsage exit 1 ;; esac done if [[ "${JIRAISSUE}" == "" && "${PATCHFILE}" == "" ]] ; then echo "Either --jira or --patch option must be specified" echo printUsage exit 1 fi if [[ "${JIRAISSUE}" != "" && "${PATCHFILE}" != "" ]] ; then echo "Cannot specify --jira or --patch options together" echo printUsage exit 1 fi if [[ "${RESETSCM}" == "false" && "${DIRTYSCM}" == "false" ]] ; then echo "Either --reset-scm or --dirty-scm option must be specified" echo printUsage exit 1 fi if [[ "${RESETSCM}" == "true" && "${DIRTYSCM}" == "true" ]] ; then echo "Cannot specify --reset-scm and --dirty-scm options together" echo printUsage exit 1 fi } ############################################################################### listTasks() { echo "Available Tasks:" echo "" getAllTasks for taskFile in ${TASKFILES} ; do taskName=`bash $taskFile --taskname` echo " $taskName" done echo } ############################################################################### downloadPatch () { PATCHFILE=${TEMPDIR}/test.patch jiraPage=${TEMPDIR}/jira.txt curl "${JIRAURLISSUEPREFIX}${JIRAISSUE}" > ${jiraPage} if [[ `grep -c 'Patch Available' ${jiraPage}` == 0 ]] ; then echo "$JIRAISSUE is not \"Patch Available\". Exiting." echo exit 1 fi relativePatchURL=`grep -o '"/jira/secure/attachment/[0-9]*/[^"]*' ${jiraPage} \ | grep -v -e 'htm[l]*$' | sort | tail -1 \ | grep -o '/jira/secure/attachment/[0-9]*/[^"]*'` patchURL="${JIRAHOST}${relativePatchURL}" patchNum=`echo $patchURL | grep -o '[0-9]*/' | grep -o '[0-9]*'` curl ${patchURL} > ${PATCHFILE} if [[ $? 
!= 0 ]] ; then echo "Could not download patch for ${JIRAISSUE} from ${patchURL}" echo cleanupAndExit 1 fi PATCHNAME=$(echo $patchURL | sed 's/.*\///g') echo "JIRA ${JIRAISSUE}, patch downloaded at `date` from ${patchURL}" echo echo "Patch [$PATCHNAME|$patchURL] downloaded at $(date)" >> ${SUMMARYFILE} echo "" >> ${SUMMARYFILE} } ############################################################################### applyPatch() { echo "Applying patch" >> $STDOUT echo "" >> $STDOUT patch -f -E --dry-run -p0 < ${PATCHFILE} | tee ${REPORTDIR}/APPLY-PATCH.txt \ >> $STDOUT if [[ ${PIPESTATUS[0]} != 0 ]] ; then echo "Patch failed to apply to head of branch" echo "{color:red}-1{color} Patch failed to apply to head of branch" >> ${SUMMARYFILE} echo "" >> ${SUMMARYFILE} echo "----------------------------" >> ${SUMMARYFILE} echo cleanupAndExit 1 fi patch -f -E -p0 < ${PATCHFILE} > ${REPORTDIR}/APPLY-PATCH.txt if [[ $? != 0 ]] ; then echo "ODD!, dry run passed, but patch failed to apply to head of branch" echo cleanupAndExit 1 fi echo "" >> $STDOUT echo "Patch applied" echo "{color:green}+1 PATCH_APPLIES{color}" >> $SUMMARYFILE echo } ############################################################################### run() { task=`bash $1 --taskname` if [[ "${TASKSTORUN}" == "" || "${TASKSTORUN}" =~ "${task}" ]] ; then if [[ ! "${TASKSTOSKIP}" =~ "${task}" ]] ; then echo " Running test-patch task ${task}" outputFile="`basename $1`-$2.out" $1 --op=$2 --tempdir=${TEMPDIR} --reportdir=${REPORTDIR} \ --summaryfile=${SUMMARYFILE} --patchfile=${PATCHFILE} ${MVNPASSTHRU} \ ${VERBOSEOPTION} | tee ${TEMPDIR}/${outputFile} >> $STDOUT if [[ $? != 0 ]] ; then echo " Failure, check for details ${TEMPDIR}/${outputFile}" echo cleanupAndExit 1 fi fi fi } ############################################################################### getAllTasks() { TASKFILES=`ls -a bin/test\-patch\-[0-9][0-9]\-*` } ############################################################################### prePatchRun() { echo "Pre patch" for taskFile in ${TASKFILES} ; do run $taskFile pre done echo } ############################################################################### postPatchRun() { echo "Post patch" for taskFile in ${TASKFILES} ; do run $taskFile post done echo } ############################################################################### createReports() { echo "Reports" for taskFile in ${TASKFILES} ; do run $taskFile report done echo } ############################################################################### echo parseArgs "$@" prepareSCM prepareTestPatchDirs echo "" > ${SUMMARYFILE} if [ "${PATCHFILE}" == "" ] ; then echo "Testing JIRA ${JIRAISSUE}" echo echo "Testing JIRA ${JIRAISSUE}" >> ${SUMMARYFILE} echo "" >> ${SUMMARYFILE} else if [ ! 
-e ${PATCHFILE} ] ; then echo "Patch file does not exist" cleanupAndExit 1 fi echo "Testing patch ${PATCHFILE}" echo echo "Testing patch ${PATCHFILE}" >> ${SUMMARYFILE} echo "" >> ${SUMMARYFILE} fi echo "" >> ${SUMMARYFILE} if [ "${PATCHFILE}" == "" ] ; then downloadPatch ${JIRAISSUE} fi echo "----------------------------" >> ${SUMMARYFILE} echo "" >> ${SUMMARYFILE} getAllTasks prePatchRun applyPatch postPatchRun createReports echo "" >> ${SUMMARYFILE} echo "----------------------------" >> ${SUMMARYFILE} MINUSONES=`grep -c "\}\-1" ${SUMMARYFILE}` if [[ $MINUSONES == 0 ]]; then echo "{color:green}*+1 Overall result, good!, no -1s*{color}" >> ${SUMMARYFILE} else echo "{color:red}*-1 Overall result, please check the reported -1(s)*{color}" >> ${SUMMARYFILE} fi echo "" >> ${SUMMARYFILE} WARNINGS=`grep -c "\}WARNING" ${SUMMARYFILE}` if [[ $WARNINGS != 0 ]]; then echo "{color:red}. There is at least one warning, please check{color}" >> ${SUMMARYFILE} fi echo "" >> ${SUMMARYFILE} if [ ! -z "${JIRAISSUE}" ]; then echo "The full output of the test-patch run is available at" >> ${SUMMARYFILE} echo "" >> ${SUMMARYFILE} echo ". ${BUILD_URL}" >> ${SUMMARYFILE} echo "" >> ${SUMMARYFILE} else echo echo "Refer to ${REPORTDIR} for detailed test-patch reports" echo fi cat ${SUMMARYFILE} | sed -e 's/{color}//' -e 's/{color:green}//' -e 's/{color:red}//' -e 's/^\.//' -e 's/^\*//' -e 's/\*$//' > ${SUMMARYFILETXT} cat ${SUMMARYFILETXT} cleanupAndExit `expr $MINUSONES != 0` bookkeeper-release-4.2.4/bin/test-patch-00-clean000077500000000000000000000047061244507361200213470ustar00rootroot00000000000000#!/bin/bash # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. if [ "${TESTPATCHDEBUG}" == "true" ] ; then set -x fi BASEDIR=$(pwd) TASKNAME="CLEAN" OP="" TEMPDIR="" REPORTDIR="" SUMMARYFILE="" MVNPASSTHRU="" ############################################################################### cleanupAndExit() { exit $1 } ############################################################################### printUsage() { echo "Usage: $0 --taskname | (--op=pre|post|report --tempdir=) [-D...] [-P...]" echo } ############################################################################### parseArgs() { for i in $* do case $i in --taskname) echo ${TASKNAME} exit 0 ;; --op=*) OP=${i#*=} ;; --tempdir=*) TEMPDIR=${i#*=} ;; --reportdir=*) REPORTDIR=${i#*=} ;; --summaryfile=*) SUMMARYFILE=${i#*=} ;; -D*) MVNPASSTHRU="${MVNPASSTHRU} $i" ;; -P*) MVNPASSTHRU="${MVNPASSTHRU} $i" ;; esac done if [[ "${OP}" == "" || "${TEMPDIR}" == "" ]] ; then echo "Missing options" echo printUsage cleanupAndExit 1 fi if [[ "${OP}" != "pre" && "${OP}" != "post" && "${OP}" != "report" ]] ; then echo "Invalid operation" echo printUsage cleanupAndExit 1 fi } ############################################################################### parseArgs "$@" case $OP in pre) mvn clean ${MVNPASSTHRU} > ${TEMPDIR}/${TASKNAME}.txt EXITCODE=$? 
# removing files created by dependency:copy-dependencies rm -f */lib/* exit $EXITCODE ;; post) mvn clean ${MVNPASSTHRU} >> ${TEMPDIR}/${TASKNAME}.txt EXITCODE=$? ;; report) echo "{color:green}+1 ${TASKNAME}{color}" >> $SUMMARYFILE ;; esac exit 0 bookkeeper-release-4.2.4/bin/test-patch-05-patch-raw-analysis000077500000000000000000000123621244507361200237760ustar00rootroot00000000000000#!/bin/bash # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. source $(dirname "$0")/raw-check-patch if [ "${TESTPATCHDEBUG}" == "true" ] ; then set -x fi BASEDIR=$(pwd) TASKNAME="RAW_PATCH_ANALYSIS" OP="" TEMPDIR="" REPORTDIR="" SUMMARYFILE="" PATCHFILE="" ############################################################################### cleanupAndExit() { exit $1 } ############################################################################### printUsage() { echo "Usage: $0 --taskname | (--op=pre|post|report --tempdir= --reportdir= --summaryfile=)" echo } ############################################################################### parseArgs() { for i in $* do case $i in --taskname) echo ${TASKNAME} exit 0 ;; --op=*) OP=${i#*=} ;; --tempdir=*) TEMPDIR=${i#*=} ;; --reportdir=*) REPORTDIR=${i#*=} ;; --summaryfile=*) SUMMARYFILE=${i#*=} ;; --patchfile=*) PATCHFILE=${i#*=} ;; esac done if [[ "${TASKNAME}" == "" || "${OP}" == "" || "${TEMPDIR}" == "" || "${REPORTDIR}" == "" || "${SUMMARYFILE}" == "" || "${PATCHFILE}" == "" ]] ; then echo "Missing options" echo printUsage cleanupAndExit 1 fi if [[ "${OP}" != "pre" && "${OP}" != "post" && "${OP}" != "report" ]] ; then echo "Invalid operation" echo printUsage cleanupAndExit 1 fi } ############################################################################### checkNoAuthors() { TMPFILE=$TEMPDIR/$TASKNAME-authors.txt printAuthors $PATCHFILE > $TMPFILE authorTags=$(wc -l $TMPFILE | awk '{print $1}') if [[ ${authorTags} != 0 ]] ; then REPORT+=("{color:red}-1{color} the patch seems to contain ${authorTags} line(s) with @author tags") REPORT+=("$(cat $TMPFILE)") else REPORT+=("{color:green}+1{color} the patch does not introduce any @author tags") fi } ############################################################################### checkNoTabs() { TMPFILE=$TEMPDIR/$TASKNAME-tabs.txt printTabs $PATCHFILE > $TMPFILE tabs=$(wc -l $TMPFILE | awk '{print $1}') if [[ ${tabs} != 0 ]] ; then REPORT+=("{color:red}-1{color} the patch contains ${tabs} line(s) with tabs") REPORT+=("$(cat $TMPFILE)") else REPORT+=("{color:green}+1{color} the patch does not introduce any tabs") fi } ############################################################################### checkNoTrailingSpaces() { TMPFILE=$TEMPDIR/$TASKNAME-trailingspaces.txt printTrailingSpaces $PATCHFILE > $TMPFILE trailingSpaces=$(wc -l $TMPFILE | awk '{print $1}') if [[ ${trailingSpaces} != 0 ]] ; then REPORT+=("{color:red}-1{color} the patch contains ${trailingSpaces} line(s) with trailing spaces") REPORT+=("$(cat $TMPFILE)") else REPORT+=("{color:green}+1{color} the patch does not introduce any trailing spaces") fi } 
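# Note on report format (illustrative): each check in this script appends
# jira-markup verdict lines to the REPORT array; a patch with two tab-indented
# added lines, for instance, would contribute:
#   {color:red}-1{color} the patch contains 2 line(s) with tabs
# followed by the offending patch lines as printed by printTabs in
# bin/raw-check-patch (one "L<n>:<line>" entry per hit).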
############################################################################### checkLinesLength() { TMPFILE=$TEMPDIR/$TASKNAME-longlines.txt printLongLines $PATCHFILE > $TMPFILE longLines=$(wc -l $TMPFILE | awk '{print $1}') if [[ ${longLines} != 0 ]] ; then REPORT+=("{color:red}-1{color} the patch contains ${longLines} line(s) longer than 120 characters") REPORT+=("$(cat $TMPFILE)") else REPORT+=("{color:green}+1{color} the patch does not introduce any line longer than 120") fi } ############################################################################### checkForTestcases() { testcases=`grep -c -i -e '^+++.*/test' ${PATCHFILE}` if [[ ${testcases} == 0 ]] ; then REPORT+=("{color:red}-1{color} the patch does not add/modify any testcase") #reverting for summary +1 calculation testcases=1 else REPORT+=("{color:green}+1{color} the patch adds/modifies ${testcases} testcase(s)") #reverting for summary +1 calculation testcases=0 fi } ############################################################################### parseArgs "$@" case $OP in pre) ;; post) ;; report) REPORT=() checkNoAuthors checkNoTabs checkNoTrailingSpaces checkLinesLength checkForTestcases total=`expr $authorTags + $tabs + $trailingSpaces + $longLines + $testcases` if [[ $total == 0 ]] ; then echo "{color:green}+1 ${TASKNAME}{color}" >> $SUMMARYFILE else echo "{color:red}-1 ${TASKNAME}{color}" >> $SUMMARYFILE fi for line in "${REPORT[@]}" ; do echo ". ${line}" >> $SUMMARYFILE done ;; esac exit 0
bookkeeper-release-4.2.4/bin/test-patch-08-rat000077500000000000000000000074011244507361200210560ustar00rootroot00000000000000#!/bin/bash # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. if [ "${TESTPATCHDEBUG}" == "true" ] ; then set -x fi BASEDIR=$(pwd) TASKNAME="RAT" OP="" TEMPDIR="" REPORTDIR="" SUMMARYFILE="" STDOUT="/dev/null" MVNPASSTHRU="" ############################################################################### cleanupAndExit() { exit $1 } ############################################################################### printUsage() { echo "Usage: $0 --taskname | (--op=pre|post|report --tempdir= --reportdir= --summaryfile=) [-D...] [-P...]" echo } ############################################################################### parseArgs() { for i in $* do case $i in --taskname) echo ${TASKNAME} exit 0 ;; --op=*) OP=${i#*=} ;; --tempdir=*) TEMPDIR=${i#*=} ;; --reportdir=*) REPORTDIR=${i#*=} ;; --summaryfile=*) SUMMARYFILE=${i#*=} ;; --verbose) STDOUT="/dev/stdout" ;; -D*) MVNPASSTHRU="${MVNPASSTHRU} $i" ;; -P*) MVNPASSTHRU="${MVNPASSTHRU} $i" ;; esac done if [[ "${TASKNAME}" == "" || "${OP}" == "" || "${TEMPDIR}" == "" || "${REPORTDIR}" == "" || "${SUMMARYFILE}" == "" ]] ; then echo "Missing options" echo printUsage cleanupAndExit 1 fi if [[ "${OP}" != "pre" && "${OP}" != "post" && "${OP}" != "report" ]] ; then echo "Invalid operation" echo printUsage cleanupAndExit 1 fi } ############################################################################### checkForWarnings() { cleanWarns=`grep -c '\!?????'
${REPORTDIR}/${TASKNAME}-clean.txt` patchWarns=`grep -c '\!?????' ${REPORTDIR}/${TASKNAME}-patch.txt` newWarns=`expr $patchWarns - $cleanWarns` if [[ $newWarns -le 0 ]] ; then REPORT+=("{color:green}+1{color} the patch does not seem to introduce new RAT warnings") newWarns=0 else REPORT+=("{color:red}-1{color} the patch seems to introduce $newWarns new RAT warning(s)") newWarns=1 fi if [[ $cleanWarns != 0 ]] ; then REPORT+=("{color:red}WARNING: the current HEAD has $cleanWarns RAT warning(s), they should be addressed ASAP{color}") fi } ############################################################################### copyRatFiles() { TAG=$1 rm -f ${REPORTDIR}/${TASKNAME}-$TAG.txt for f in $(find . -name rat.txt); do cat $f >> ${REPORTDIR}/${TASKNAME}-$TAG.txt done } ############################################################################### parseArgs "$@" case $OP in pre) mvn apache-rat:check ${MVNPASSTHRU} > $STDOUT copyRatFiles clean ;; post) mvn apache-rat:check ${MVNPASSTHRU} > $STDOUT copyRatFiles patch ;; report) checkForWarnings if [[ $newWarns == 0 ]] ; then echo "{color:green}+1 ${TASKNAME}{color}" >> $SUMMARYFILE else echo "{color:red}-1 ${TASKNAME}{color}" >> $SUMMARYFILE fi for line in "${REPORT[@]}" ; do echo ". ${line}" >> $SUMMARYFILE done ;; esac exit 0 bookkeeper-release-4.2.4/bin/test-patch-09-javadoc000077500000000000000000000072021244507361200216770ustar00rootroot00000000000000#!/bin/bash # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. if [ "${TESTPATCHDEBUG}" == "true" ] ; then set -x fi BASEDIR=$(pwd) TASKNAME="JAVADOC" OP="" TEMPDIR="" REPORTDIR="" SUMMARYFILE="" MVNPASSTHRU="" ############################################################################### cleanupAndExit() { exit $1 } ############################################################################### printUsage() { echo "Usage: $0 --taskname | (--op=pre|post|report --tempdir= --reportdir= --summaryfile=) [-D...] 
[-P...]" echo } ############################################################################### parseArgs() { for i in $* do case $i in --taskname) echo ${TASKNAME} exit 0 ;; --op=*) OP=${i#*=} ;; --tempdir=*) TEMPDIR=${i#*=} ;; --reportdir=*) REPORTDIR=${i#*=} ;; --summaryfile=*) SUMMARYFILE=${i#*=} ;; -D*) MVNPASSTHRU="${MVNPASSTHRU} $i" ;; -P*) MVNPASSTHRU="${MVNPASSTHRU} $i" ;; esac done if [[ "${TASKNAME}" == "" || "${OP}" == "" || "${TEMPDIR}" == "" || "${REPORTDIR}" == "" || "${SUMMARYFILE}" == "" ]] ; then echo "Missing options" echo printUsage cleanupAndExit 1 fi if [[ "${OP}" != "pre" && "${OP}" != "post" && "${OP}" != "report" ]] ; then echo "Invalid operation" echo printUsage cleanupAndExit 1 fi } ############################################################################### checkForWarnings() { cleanWarns=`grep '\[WARNING\]' ${REPORTDIR}/${TASKNAME}-clean.txt | awk '/Javadoc Warnings/,EOF' | grep warning | awk 'BEGIN {total = 0} {total += 1} END {print total}'` patchWarns=`grep '\[WARNING\]' ${REPORTDIR}/${TASKNAME}-patch.txt | awk '/Javadoc Warnings/,EOF' | grep warning | awk 'BEGIN {total = 0} {total += 1} END {print total}'` newWarns=`expr $patchWarns - $cleanWarns` if [[ $newWarns -le 0 ]] ; then REPORT+=("{color:green}+1{color} the patch does not seem to introduce new Javadoc warnings") newWarns=0 else REPORT+=("{color:red}-1{color} the patch seems to introduce $newWarns new Javadoc warning(s)") newWarns=1 fi if [[ $cleanWarns != 0 ]] ; then REPORT+=("{color:red}WARNING{color}: the current HEAD has $cleanWarns Javadoc warning(s)") fi } ############################################################################### parseArgs "$@" case $OP in pre) mvn clean javadoc:aggregate ${MVNPASSTHRU} > ${REPORTDIR}/${TASKNAME}-clean.txt ;; post) mvn clean javadoc:aggregate ${MVNPASSTHRU} > ${REPORTDIR}/${TASKNAME}-patch.txt ;; report) checkForWarnings if [[ $newWarns == 0 ]] ; then echo "{color:green}+1 ${TASKNAME}{color}" >> $SUMMARYFILE else echo "{color:red}-1 ${TASKNAME}{color}" >> $SUMMARYFILE fi for line in "${REPORT[@]}" ; do echo ". ${line}" >> $SUMMARYFILE done ;; esac exit 0 bookkeeper-release-4.2.4/bin/test-patch-10-compile000077500000000000000000000111151244507361200217060ustar00rootroot00000000000000#!/bin/bash # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. if [ "${TESTPATCHDEBUG}" == "true" ] ; then set -x fi BASEDIR=$(pwd) TASKNAME="COMPILE" OP="" TEMPDIR="" REPORTDIR="" SUMMARYFILE="" STDOUT="/dev/null" MVNPASSTHRU="" ############################################################################### cleanupAndExit() { exit $1 } ############################################################################### printUsage() { echo "Usage: $0 --taskname | (--op=pre|post|report --tempdir= --reportdir= --summaryfile=) [--verbose] [-D...] 
[-P...]" echo } ############################################################################### parseArgs() { for i in $* do case $i in --taskname) echo ${TASKNAME} exit 0 ;; --op=*) OP=${i#*=} ;; --tempdir=*) TEMPDIR=${i#*=} ;; --reportdir=*) REPORTDIR=${i#*=} ;; --summaryfile=*) SUMMARYFILE=${i#*=} ;; --verbose) STDOUT="/dev/stdout" ;; -D*) MVNPASSTHRU="${MVNPASSTHRU} $i" ;; -P*) MVNPASSTHRU="${MVNPASSTHRU} $i" ;; esac done if [[ "${TASKNAME}" == "" || "${OP}" == "" || "${TEMPDIR}" == "" || "${REPORTDIR}" == "" || "${SUMMARYFILE}" == "" ]] ; then echo "Missing options" echo printUsage cleanupAndExit 1 fi if [[ "${OP}" != "pre" && "${OP}" != "post" && "${OP}" != "report" ]] ; then echo "Invalid operation" echo printUsage cleanupAndExit 1 fi } ############################################################################### checkForWarnings() { grep '\[WARNING\]' ${REPORTDIR}/${TASKNAME}-clean.txt > ${TEMPDIR}/${TASKNAME}-javacwarns-clean.txt grep '\[WARNING\]' ${REPORTDIR}/${TASKNAME}-patch.txt > ${TEMPDIR}/${TASKNAME}-javacwarns-patch.txt cleanWarns=`cat ${TEMPDIR}/${TASKNAME}-javacwarns-clean.txt | awk 'BEGIN {total = 0} {total += 1} END {print total}'` patchWarns=`cat ${TEMPDIR}/${TASKNAME}-javacwarns-patch.txt | awk 'BEGIN {total = 0} {total += 1} END {print total}'` newWarns=`expr $patchWarns - $cleanWarns` if [[ $newWarns -le 0 ]] ; then REPORT+=("{color:green}+1{color} the patch does not seem to introduce new javac warnings") newWarns=0 else REPORT+=("{color:red}-1{color} the patch seems to introduce $newWarns new javac warning(s)") newWarns=1 fi if [[ $cleanWarns != 0 ]] ; then REPORT+=("{color:red}WARNING{color}: the current HEAD has $cleanWarns javac warning(s)") fi } ############################################################################### parseArgs "$@" case $OP in pre) mvn clean package -DskipTests ${MVNPASSTHRU} | tee ${REPORTDIR}/${TASKNAME}-clean.txt >> $STDOUT if [[ ${PIPESTATUS[0]} == 0 ]] ; then echo "{color:green}+1{color} HEAD compiles" >> ${TEMPDIR}/${TASKNAME}-compile.txt else echo "{color:red}-1{color} HEAD does not compile" >> ${TEMPDIR}/${TASKNAME}-compile.txt fi ;; post) mvn clean package -DskipTests ${MVNPASSTHRU} | tee ${REPORTDIR}/${TASKNAME}-patch.txt >> $STDOUT if [[ ${PIPESTATUS[0]} == 0 ]] ; then echo "{color:green}+1{color} patch compiles" >> ${TEMPDIR}/${TASKNAME}-compile.txt else echo "{color:red}-1{color} patch does not compile" >> ${TEMPDIR}/${TASKNAME}-compile.txt fi ;; report) REPORT=() compileErrors=0 while read line; do REPORT+=("$line") if [[ "$line" =~ "-1" ]] ; then compileErrors=1 fi done < ${TEMPDIR}/${TASKNAME}-compile.txt checkForWarnings total=`expr $compileErrors + $newWarns` if [[ $total == 0 ]] ; then echo "{color:green}+1 ${TASKNAME}{color}" >> $SUMMARYFILE else echo "{color:red}-1 ${TASKNAME}{color}" >> $SUMMARYFILE fi for line in "${REPORT[@]}" ; do echo ". ${line}" >> $SUMMARYFILE done ;; esac exit 0 bookkeeper-release-4.2.4/bin/test-patch-11-findbugs000077500000000000000000000110141244507361200220560ustar00rootroot00000000000000#!/bin/bash # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. if [ "${TESTPATCHDEBUG}" == "true" ] ; then set -x fi BASEDIR=$(pwd) TASKNAME="FINDBUGS" OP="" TEMPDIR="" REPORTDIR="" SUMMARYFILE="" STDOUT="/dev/null" MVNPASSTHRU="" ############################################################################### cleanupAndExit() { exit $1 } ############################################################################### printUsage() { echo "Usage: $0 --taskname | (--op=pre|post|report --tempdir= --reportdir= --summaryfile=) [-D...] [-P...]" echo } ############################################################################### parseArgs() { for i in $* do case $i in --taskname) echo ${TASKNAME} exit 0 ;; --op=*) OP=${i#*=} ;; --tempdir=*) TEMPDIR=${i#*=} ;; --reportdir=*) REPORTDIR=${i#*=} ;; --summaryfile=*) SUMMARYFILE=${i#*=} ;; --verbose) STDOUT="/dev/stdout" ;; -D*) MVNPASSTHRU="${MVNPASSTHRU} $i" ;; -P*) MVNPASSTHRU="${MVNPASSTHRU} $i" ;; esac done if [[ "${TASKNAME}" == "" || "${OP}" == "" || "${TEMPDIR}" == "" || "${REPORTDIR}" == "" || "${SUMMARYFILE}" == "" ]] ; then echo "Missing options" echo printUsage cleanupAndExit 1 fi if [[ "${OP}" != "pre" && "${OP}" != "post" && "${OP}" != "report" ]] ; then echo "Invalid operation" echo printUsage cleanupAndExit 1 fi } ############################################################################### checkForWarnings() { cleanBugs=0 patchBugs=0 for m in $(getModules); do MODNAME=$(echo $m | sed 's/\///') m_cleanBugs=$(cat ${REPORTDIR}/${TASKNAME}-${MODNAME}-clean.xml \ | sed 's/<\/BugInstance>/<\/BugInstance>\n/g' | grep BugInstance | wc -l) m_patchBugs=$(cat ${REPORTDIR}/${TASKNAME}-${MODNAME}-patch.xml \ | sed 's/<\/BugInstance>/<\/BugInstance>\n/g' | grep BugInstance | wc -l) m_newBugs=`expr $m_patchBugs - $m_cleanBugs` if [[ $m_newBugs != 0 ]] ; then BUGMODULES="$MODNAME $BUGMODULES" fi cleanBugs=$(($cleanBugs+$m_cleanBugs)) patchBugs=$(($patchBugs+$m_patchBugs)) done BUGMODULES=$(echo $BUGMODULES | sed 's/^ *//' | sed 's/ *$//') newBugs=`expr $patchBugs - $cleanBugs` if [[ $newBugs -le 0 ]] ; then REPORT+=("{color:green}+1{color} the patch does not seem to introduce new Findbugs warnings") newBugs=0 else REPORT+=("{color:red}-1{color} the patch seems to introduce $newBugs new Findbugs warning(s) in module(s) [$BUGMODULES]") newBugs=1 fi if [[ $cleanBugs != 0 ]] ; then REPORT+=("{color:red}WARNING: the current HEAD has $cleanBugs Findbugs warning(s), they should be addressed ASAP{color}") fi } ############################################################################### getModules() { find . -name pom.xml | sed 's/^.\///' | sed 's/pom.xml$//' | grep -v compat } ############################################################################### copyFindbugsXml() { TAG=$1 for m in $(getModules); do MODNAME=$(echo $m | sed 's/\///') cp ${m}target/findbugsXml.xml ${REPORTDIR}/${TASKNAME}-${MODNAME}-$TAG.xml done } parseArgs "$@" case $OP in pre) mvn findbugs:findbugs ${MVNPASSTHRU} > $STDOUT copyFindbugsXml clean ;; post) mvn findbugs:findbugs ${MVNPASSTHRU} > $STDOUT copyFindbugsXml patch ;; report) checkForWarnings if [[ $newBugs == 0 ]] ; then echo "{color:green}+1 ${TASKNAME}{color}" >> $SUMMARYFILE else echo "{color:red}-1 ${TASKNAME}{color}" >> $SUMMARYFILE fi for line in "${REPORT[@]}" ; do echo ".
${line}" >> $SUMMARYFILE done ;; esac exit 0 bookkeeper-release-4.2.4/bin/test-patch-20-tests000077500000000000000000000105371244507361200214300ustar00rootroot00000000000000#!/bin/bash # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. if [ "${TESTPATCHDEBUG}" == "true" ] ; then set -x fi BASEDIR=$(pwd) TASKNAME="TESTS" OP="" TEMPDIR="" REPORTDIR="" SUMMARYFILE="" STDOUT="/dev/null" MVNPASSTHRU="" ############################################################################### cleanupAndExit() { exit $1 } ############################################################################### printUsage() { echo "Usage: $0 --taskname | (--op=pre|post|report --tempdir= --reportdir= --summaryfile=) [--verbose] [-D...] [-P...]" echo } ############################################################################### parseArgs() { for i in $* do case $i in --taskname) echo ${TASKNAME} exit 0 ;; --op=*) OP=${i#*=} ;; --tempdir=*) TEMPDIR=${i#*=} ;; --reportdir=*) REPORTDIR=${i#*=} ;; --summaryfile=*) SUMMARYFILE=${i#*=} ;; --verbose) STDOUT="/dev/stdout" ;; -D*) MVNPASSTHRU="${MVNPASSTHRU} $i" ;; -P*) MVNPASSTHRU="${MVNPASSTHRU} $i" ;; esac done if [[ "${TASKNAME}" == "" || "${OP}" == "" || "${TEMPDIR}" == "" || "${REPORTDIR}" == "" || "${SUMMARYFILE}" == "" ]] ; then echo "Missing options" echo printUsage cleanupAndExit 1 fi if [[ "${OP}" != "pre" && "${OP}" != "post" && "${OP}" != "report" ]] ; then echo "Invalid operation" echo printUsage cleanupAndExit 1 fi } ############################################################################### parseArgs "$@" case $OP in pre) ;; post) # must use package instead of test so that compat-deps shaded jars are correct mvn package ${MVNPASSTHRU} -Dmaven.test.failure.ignore=true \ -Dmaven.test.error.ignore=true -fae \ -Dtest.timeout=7200 | tee ${TEMPDIR}/${TASKNAME}.out >> $STDOUT exitCode=${PIPESTATUS[0]} echo "$exitCode" > ${TEMPDIR}/${TASKNAME}.exitCode ;; report) failedTests=` find . -name '*\.txt' | grep target/surefire-reports | xargs grep "<<< FAILURE" | grep -v "Tests run:" | sed 's/.*\.txt\://' | sed 's/ .*//'` testsRun=`grep "Tests run:" ${TEMPDIR}/${TASKNAME}.out | grep -v " Time elapsed:" | awk '{print $3}' | sed 's/,//' | awk 'BEGIN {count=0} {count=count+$1} END {print count}'` testsFailed=`grep "Tests run:" ${TEMPDIR}/${TASKNAME}.out | grep -v " Time elapsed:" | awk '{print $5}' | sed 's/,//' | awk 'BEGIN {count=0} {count=count+$1} END {print count}'` testsErrors=`grep "Tests run:" ${TEMPDIR}/${TASKNAME}.out | grep -v " Time elapsed:" | awk '{print $7}' | sed 's/,//' | awk 'BEGIN {count=0} {count=count+$1} END {print count}'` hasFailures=`expr $testsFailed + $testsErrors` testsExitCode=`cat ${TEMPDIR}/${TASKNAME}.exitCode` if [[ $hasFailures != 0 ]] ; then echo "{color:red}-1 ${TASKNAME}{color}" >> $SUMMARYFILE echo ". Tests run: $testsRun" >> $SUMMARYFILE echo ". Tests failed: $testsFailed" >> $SUMMARYFILE echo ". Tests errors: $testsErrors" >> $SUMMARYFILE echo "" >> ${SUMMARYFILE} echo ". 
The patch failed the following testcases:" >> $SUMMARYFILE echo "" >> ${SUMMARYFILE} echo "${failedTests}" | sed 's/^/. /' >> $SUMMARYFILE echo "" >> ${SUMMARYFILE} else if [[ "$testsExitCode" != "0" ]] ; then echo "{color:red}-1 ${TASKNAME}{color} - patch does not compile, cannot run testcases" >> $SUMMARYFILE else echo "{color:green}+1 ${TASKNAME}{color}" >> $SUMMARYFILE echo ". Tests run: $testsRun" >> $SUMMARYFILE fi fi ;; esac exit 0 bookkeeper-release-4.2.4/bin/test-patch-30-dist000077500000000000000000000057141244507361200212330ustar00rootroot00000000000000#!/bin/bash # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. if [ "${TESTPATCHDEBUG}" == "true" ] ; then set -x fi BASEDIR=$(pwd) TASKNAME="DISTRO" OP="" TEMPDIR="" REPORTDIR="" SUMMARYFILE="" STDOUT="/dev/null" MVNPASSTHRU="" ############################################################################### cleanupAndExit() { exit $1 } ############################################################################### printUsage() { echo "Usage: $0 --taskname | (--op=pre|post|report --tempdir= --reportdir= --summaryfile=) [--verbose] [-D...] [-P...]" echo } ############################################################################### parseArgs() { for i in $* do case $i in --taskname) echo ${TASKNAME} exit 0 ;; --op=*) OP=${i#*=} ;; --tempdir=*) TEMPDIR=${i#*=} ;; --reportdir=*) REPORTDIR=${i#*=} ;; --summaryfile=*) SUMMARYFILE=${i#*=} ;; --verbose) STDOUT="/dev/stdout" ;; -D*) MVNPASSTHRU="${MVNPASSTHRU} $i" ;; -P*) MVNPASSTHRU="${MVNPASSTHRU} $i" ;; esac done if [[ "${TASKNAME}" == "" || "${OP}" == "" || "${TEMPDIR}" == "" || "${REPORTDIR}" == "" || "${SUMMARYFILE}" == "" ]] ; then echo "Missing options" echo printUsage cleanupAndExit 1 fi if [[ "${OP}" != "pre" && "${OP}" != "post" && "${OP}" != "report" ]] ; then echo "Invalid operation" echo printUsage cleanupAndExit 1 fi } ############################################################################### parseArgs "$@" case $OP in pre) ;; post) mvn package assembly:single -DskipTests | tee ${REPORTDIR}/${TASKNAME}.out >> $STDOUT exitCode=${PIPESTATUS[0]} echo "$exitCode" > ${TEMPDIR}/${TASKNAME}.exitCode ;; report) exitCode=`cat ${TEMPDIR}/${TASKNAME}.exitCode` if [[ "$exitCode" != "0" ]] ; then echo "{color:red}-1 ${TASKNAME}{color}" >> $SUMMARYFILE echo ". {color:red}-1{color} distro tarball fails with the patch" >> $SUMMARYFILE else echo "{color:green}+1 ${TASKNAME}{color}" >> $SUMMARYFILE echo ". 
{color:green}+1{color} distro tarball builds with the patch " >> $SUMMARYFILE fi ;; esac exit 0 bookkeeper-release-4.2.4/bookkeeper-benchmark/000077500000000000000000000000001244507361200213475ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-benchmark/bin/000077500000000000000000000000001244507361200221175ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-benchmark/bin/benchmark000077500000000000000000000103371244507361200240030ustar00rootroot00000000000000#!/usr/bin/env bash # #/** # * Copyright 2007 The Apache Software Foundation # * # * Licensed to the Apache Software Foundation (ASF) under one # * or more contributor license agreements. See the NOTICE file # * distributed with this work for additional information # * regarding copyright ownership. The ASF licenses this file # * to you under the Apache License, Version 2.0 (the # * "License"); you may not use this file except in compliance # * with the License. You may obtain a copy of the License at # * # * http://www.apache.org/licenses/LICENSE-2.0 # * # * Unless required by applicable law or agreed to in writing, software # * distributed under the License is distributed on an "AS IS" BASIS, # * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # * See the License for the specific language governing permissions and # * limitations under the License. # */ # check if net.ipv6.bindv6only is set to 1 bindv6only=$(/sbin/sysctl -n net.ipv6.bindv6only 2> /dev/null) if [ -n "$bindv6only" ] && [ "$bindv6only" -eq "1" ] then echo "Error: \"net.ipv6.bindv6only\" is set to 1 - Java networking could be broken" echo "For more info (the following page also applies to bookkeeper): http://wiki.apache.org/hadoop/HadoopIPv6" exit 1 fi BINDIR=`dirname "$0"` BENCH_HOME=`cd $BINDIR/..;pwd` RELEASE_JAR=`ls $BENCH_HOME/bookkeeper-benchmark-*.jar 2> /dev/null | tail -1` if [ $? == 0 ]; then BENCHMARK_JAR=$RELEASE_JAR fi BUILT_JAR=`ls $BENCH_HOME/target/bookkeeper-benchmark-*.jar 2> /dev/null | tail -1` if [ $? != 0 ] && [ ! -e "$BENCHMARK_JAR" ]; then echo "\nCouldn't find benchmark jar."; echo "Make sure you've run 'mvn package'\n"; exit 1; elif [ -e "$BUILT_JAR" ]; then BENCHMARK_JAR=$BUILT_JAR fi benchmark_help() { cat < where command is one of: writes Benchmark throughput and latency for writes reads Benchmark throughput and latency for reads bookie Benchmark an individual bookie help This help message use -help with individual commands for more options. For example, $0 writes -help or command is the full name of a class with a defined main() method. Environment variables: BENCHMARK_LOG_CONF Log4j configuration file (default: conf/log4j.properties) BENCHMARK_EXTRA_OPTS Extra options to be passed to the jvm BENCHMARK_EXTRA_CLASSPATH Add extra paths to the bookkeeper classpath EOF } add_maven_deps_to_classpath() { MVN="mvn" if [ "$MAVEN_HOME" != "" ]; then MVN=${MAVEN_HOME}/bin/mvn fi # Need to generate classpath from maven pom. This is costly so generate it # and cache it. Save the file into our target dir so a mvn clean will get # clean it up and force us create a new one. f="${BENCH_HOME}/target/cached_classpath.txt" if [ ! 
-f "${f}" ] then ${MVN} -f "${BENCH_HOME}/pom.xml" dependency:build-classpath -Dmdep.outputFile="${f}" &> /dev/null fi BENCHMARK_CLASSPATH=${CLASSPATH}:`cat "${f}"` } if [ -d "$BENCH_HOME/lib" ]; then for i in $BENCH_HOME/lib/*.jar; do BENCHMARK_CLASSPATH=$BENCHMARK_CLASSPATH:$i done else add_maven_deps_to_classpath fi # if no args specified, show usage if [ $# = 0 ]; then benchmark_help; exit 1; fi # get arguments COMMAND=$1 shift BENCHMARK_CLASSPATH="$BENCHMARK_JAR:$BENCHMARK_CLASSPATH:$BENCHMARK_EXTRA_CLASSPATH" BENCHMARK_LOG_CONF=${BENCHMARK_LOG_CONF:-$BENCH_HOME/conf/log4j.properties} if [ "$BENCHMARK_LOG_CONF" != "" ]; then BENCHMARK_CLASSPATH="`dirname $BENCHMARK_LOG_CONF`:$BENCHMARK_CLASSPATH" OPTS="$OPTS -Dlog4j.configuration=`basename $BENCHMARK_LOG_CONF`" fi OPTS="-cp $BENCHMARK_CLASSPATH $OPTS $BENCHMARK_EXTRA_OPTS" OPTS="$OPTS $BENCHMARK_EXTRA_OPTS" # Disable ipv6 as it can cause issues OPTS="$OPTS -Djava.net.preferIPv4Stack=true" if [ $COMMAND == "writes" ]; then exec java $OPTS org.apache.bookkeeper.benchmark.BenchThroughputLatency $@ elif [ $COMMAND == "reads" ]; then exec java $OPTS org.apache.bookkeeper.benchmark.BenchReadThroughputLatency $@ elif [ $COMMAND == "bookie" ]; then exec java $OPTS org.apache.bookkeeper.benchmark.BenchBookie $@ elif [ $COMMAND == "help" ]; then benchmark_help; else exec java $OPTS $COMMAND $@ fi bookkeeper-release-4.2.4/bookkeeper-benchmark/conf/000077500000000000000000000000001244507361200222745ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-benchmark/conf/log4j.properties000066400000000000000000000052461244507361200254400ustar00rootroot00000000000000# # # Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. 
# # # # Bookkeeper Logging Configuration # # Format is " (, )+ # DEFAULT: console appender only log4j.rootLogger=ERROR, CONSOLE # Example with rolling log file #log4j.rootLogger=DEBUG, CONSOLE, ROLLINGFILE # Example with rolling log file and tracing #log4j.rootLogger=TRACE, CONSOLE, ROLLINGFILE, TRACEFILE # # Log INFO level and above messages to the console # log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender log4j.appender.CONSOLE.Threshold=INFO log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} - %-5p - [%t:%C{1}@%L] - %m%n log4j.logger.org.apache.bookkeeper.benchmark=INFO # # Add ROLLINGFILE to rootLogger to get log file output # Log DEBUG level and above messages to a log file log4j.appender.ROLLINGFILE=org.apache.log4j.DailyRollingFileAppender log4j.appender.ROLLINGFILE.Threshold=DEBUG log4j.appender.ROLLINGFILE.File=bookkeeper-benchmark.log log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p - [%t:%C{1}@%L] - %m%n # Max log file size of 10MB log4j.appender.ROLLINGFILE.MaxFileSize=10MB # uncomment the next line to limit number of backup files #log4j.appender.ROLLINGFILE.MaxBackupIndex=10 log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n # # Add TRACEFILE to rootLogger to get log file output # Log DEBUG level and above messages to a log file log4j.appender.TRACEFILE=org.apache.log4j.FileAppender log4j.appender.TRACEFILE.Threshold=TRACE log4j.appender.TRACEFILE.File=bookkeeper_trace.log log4j.appender.TRACEFILE.layout=org.apache.log4j.PatternLayout ### Notice we are including log4j's NDC here (%x) log4j.appender.TRACEFILE.layout.ConversionPattern=%d{ISO8601} - %-5p [%t:%C{1}@%L][%x] - %m%n bookkeeper-release-4.2.4/bookkeeper-benchmark/pom.xml000066400000000000000000000120711244507361200226650ustar00rootroot00000000000000 4.0.0 bookkeeper org.apache.bookkeeper 4.2.4 org.apache.bookkeeper bookkeeper-benchmark bookkeeper-benchmark http://maven.apache.org UTF-8 maven-assembly-plugin 2.2.1 true org.apache.maven.plugins maven-surefire-plugin target/latencyDump.dat junit junit 4.8.1 test org.slf4j slf4j-api 1.6.4 org.slf4j slf4j-log4j12 1.6.4 org.apache.zookeeper zookeeper 3.4.3 jar compile org.apache.zookeeper zookeeper 3.4.3 test-jar test org.jboss.netty netty 3.2.4.Final compile org.apache.bookkeeper bookkeeper-server ${project.parent.version} compile jar org.apache.bookkeeper bookkeeper-server ${project.parent.version} test test-jar log4j log4j 1.2.15 javax.mail mail javax.jms jms com.sun.jdmk jmxtools com.sun.jmx jmxri commons-cli commons-cli 1.2 org.apache.hadoop hadoop-common 0.23.1 compile org.apache.hadoop hadoop-hdfs 0.23.1 compile commons-daemon commons-daemon 
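For reference, the exclusions attached to the log4j dependency above drop the Sun JMX/JMS artifacts (mail, jms, jmxtools, jmxri) that are not distributable through Maven Central; in pom.xml markup that one entry has roughly this shape (a sketch consistent with the versions listed above):

<dependency>
  <groupId>log4j</groupId>
  <artifactId>log4j</artifactId>
  <version>1.2.15</version>
  <exclusions>
    <exclusion>
      <groupId>javax.mail</groupId>
      <artifactId>mail</artifactId>
    </exclusion>
    <exclusion>
      <groupId>javax.jms</groupId>
      <artifactId>jms</artifactId>
    </exclusion>
    <exclusion>
      <groupId>com.sun.jdmk</groupId>
      <artifactId>jmxtools</artifactId>
    </exclusion>
    <exclusion>
      <groupId>com.sun.jmx</groupId>
      <artifactId>jmxri</artifactId>
    </exclusion>
  </exclusions>
</dependency>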
bookkeeper-release-4.2.4/bookkeeper-benchmark/src/000077500000000000000000000000001244507361200221365ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-benchmark/src/main/000077500000000000000000000000001244507361200230625ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-benchmark/src/main/java/000077500000000000000000000000001244507361200240035ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-benchmark/src/main/java/org/000077500000000000000000000000001244507361200245725ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-benchmark/src/main/java/org/apache/000077500000000000000000000000001244507361200260135ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-benchmark/src/main/java/org/apache/bookkeeper/000077500000000000000000000000001244507361200301415ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-benchmark/src/main/java/org/apache/bookkeeper/benchmark/000077500000000000000000000000001244507361200320735ustar00rootroot00000000000000BenchBookie.java000066400000000000000000000203201244507361200350240ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-benchmark/src/main/java/org/apache/bookkeeper/benchmark/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
* */ package org.apache.bookkeeper.benchmark; import java.net.InetSocketAddress; import java.util.concurrent.Executors; import java.io.IOException; import org.apache.zookeeper.KeeperException; import org.apache.bookkeeper.proto.BookieClient; import org.apache.bookkeeper.proto.BookieProtocol; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.WriteCallback; import org.apache.bookkeeper.util.OrderedSafeExecutor; import org.apache.bookkeeper.client.BookKeeper; import org.apache.bookkeeper.client.BKException; import org.apache.bookkeeper.client.LedgerHandle; import org.apache.bookkeeper.conf.ClientConfiguration; import org.jboss.netty.buffer.ChannelBuffer; import org.jboss.netty.buffer.ChannelBuffers; import org.jboss.netty.channel.socket.ClientSocketChannelFactory; import org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory; import org.apache.commons.cli.HelpFormatter; import org.apache.commons.cli.Option; import org.apache.commons.cli.Options; import org.apache.commons.cli.CommandLine; import org.apache.commons.cli.CommandLineParser; import org.apache.commons.cli.PosixParser; import org.apache.commons.cli.ParseException; import org.slf4j.Logger; import org.slf4j.LoggerFactory; public class BenchBookie { static Logger LOG = LoggerFactory.getLogger(BenchBookie.class); static class LatencyCallback implements WriteCallback { boolean complete; @Override synchronized public void writeComplete(int rc, long ledgerId, long entryId, InetSocketAddress addr, Object ctx) { if (rc != 0) { LOG.error("Got error " + rc); } complete = true; notifyAll(); } synchronized public void resetComplete() { complete = false; } synchronized public void waitForComplete() throws InterruptedException { while(!complete) { wait(); } } } static class ThroughputCallback implements WriteCallback { int count; int waitingCount = Integer.MAX_VALUE; synchronized public void writeComplete(int rc, long ledgerId, long entryId, InetSocketAddress addr, Object ctx) { if (rc != 0) { LOG.error("Got error " + rc); } count++; if (count >= waitingCount) { notifyAll(); } } synchronized public void waitFor(int count) throws InterruptedException { while(this.count < count) { waitingCount = count; wait(1000); } waitingCount = Integer.MAX_VALUE; } } private static long getValidLedgerId(String zkServers) throws IOException, BKException, KeeperException, InterruptedException { BookKeeper bkc = null; LedgerHandle lh = null; long id = 0; try { bkc =new BookKeeper(zkServers); lh = bkc.createLedger(1, 1, BookKeeper.DigestType.CRC32, new byte[20]); id = lh.getId(); return id; } finally { if (lh != null) { lh.close(); } if (bkc != null) { bkc.close(); } } } /** * @param args * @throws InterruptedException */ public static void main(String[] args) throws InterruptedException, ParseException, IOException, BKException, KeeperException { Options options = new Options(); options.addOption("host", true, "Hostname or IP of bookie to benchmark"); options.addOption("port", true, "Port of bookie to benchmark (default 3181)"); options.addOption("zookeeper", true, "Zookeeper ensemble, (default \"localhost:2181\")"); options.addOption("size", true, "Size of message to send, in bytes (default 1024)"); options.addOption("help", false, "This message"); CommandLineParser parser = new PosixParser(); CommandLine cmd = parser.parse(options, args); if (cmd.hasOption("help") || !cmd.hasOption("host")) { HelpFormatter formatter = new HelpFormatter(); formatter.printHelp("BenchBookie ", options); System.exit(-1); } String addr = 
cmd.getOptionValue("host"); int port = Integer.valueOf(cmd.getOptionValue("port", "3181")); int size = Integer.valueOf(cmd.getOptionValue("size", "1024")); String servers = cmd.getOptionValue("zookeeper", "localhost:2181"); ClientSocketChannelFactory channelFactory = new NioClientSocketChannelFactory(Executors.newCachedThreadPool(), Executors .newCachedThreadPool()); OrderedSafeExecutor executor = new OrderedSafeExecutor(1); ClientConfiguration conf = new ClientConfiguration(); BookieClient bc = new BookieClient(conf, channelFactory, executor); LatencyCallback lc = new LatencyCallback(); ThroughputCallback tc = new ThroughputCallback(); int warmUpCount = 999; long ledger = getValidLedgerId(servers); for(long entry = 0; entry < warmUpCount; entry++) { ChannelBuffer toSend = ChannelBuffers.buffer(size); toSend.resetReaderIndex(); toSend.resetWriterIndex(); toSend.writeLong(ledger); toSend.writeLong(entry); toSend.writerIndex(toSend.capacity()); bc.addEntry(new InetSocketAddress(addr, port), ledger, new byte[20], entry, toSend, tc, null, BookieProtocol.FLAG_NONE); } LOG.info("Waiting for warmup"); tc.waitFor(warmUpCount); ledger = getValidLedgerId(servers); LOG.info("Benchmarking latency"); int entryCount = 5000; long startTime = System.nanoTime(); for(long entry = 0; entry < entryCount; entry++) { ChannelBuffer toSend = ChannelBuffers.buffer(size); toSend.resetReaderIndex(); toSend.resetWriterIndex(); toSend.writeLong(ledger); toSend.writeLong(entry); toSend.writerIndex(toSend.capacity()); lc.resetComplete(); bc.addEntry(new InetSocketAddress(addr, port), ledger, new byte[20], entry, toSend, lc, null, BookieProtocol.FLAG_NONE); lc.waitForComplete(); } long endTime = System.nanoTime(); LOG.info("Latency: " + (((double)(endTime-startTime))/((double)entryCount))/1000000.0); entryCount = 50000; ledger = getValidLedgerId(servers); LOG.info("Benchmarking throughput"); startTime = System.currentTimeMillis(); tc = new ThroughputCallback(); for(long entry = 0; entry < entryCount; entry++) { ChannelBuffer toSend = ChannelBuffers.buffer(size); toSend.resetReaderIndex(); toSend.resetWriterIndex(); toSend.writeLong(ledger); toSend.writeLong(entry); toSend.writerIndex(toSend.capacity()); bc.addEntry(new InetSocketAddress(addr, port), ledger, new byte[20], entry, toSend, tc, null, BookieProtocol.FLAG_NONE); } tc.waitFor(entryCount); endTime = System.currentTimeMillis(); LOG.info("Throughput: " + ((long)entryCount)*1000/(endTime-startTime)); bc.close(); channelFactory.releaseExternalResources(); executor.shutdown(); } } BenchReadThroughputLatency.java000066400000000000000000000263401244507361200401110ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-benchmark/src/main/java/org/apache/bookkeeper/benchmark/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
* */ package org.apache.bookkeeper.benchmark; import org.apache.bookkeeper.client.BKException; import org.apache.bookkeeper.conf.ClientConfiguration; import org.apache.bookkeeper.client.BookKeeper; import org.apache.bookkeeper.client.LedgerHandle; import org.apache.bookkeeper.client.LedgerEntry; import org.apache.bookkeeper.client.AsyncCallback.AddCallback; import org.apache.zookeeper.ZooKeeper; import org.apache.zookeeper.Watcher; import org.apache.zookeeper.WatchedEvent; import org.apache.zookeeper.Watcher.Event; import java.util.Enumeration; import java.util.Collections; import java.util.Comparator; import java.util.List; import java.util.ArrayList; import java.util.regex.Pattern; import java.util.regex.Matcher; import java.util.concurrent.CountDownLatch; import java.util.concurrent.atomic.AtomicInteger; import org.apache.commons.cli.HelpFormatter; import org.apache.commons.cli.Option; import org.apache.commons.cli.Options; import org.apache.commons.cli.CommandLine; import org.apache.commons.cli.CommandLineParser; import org.apache.commons.cli.PosixParser; import org.apache.commons.cli.ParseException; import org.slf4j.Logger; import org.slf4j.LoggerFactory; public class BenchReadThroughputLatency { static Logger LOG = LoggerFactory.getLogger(BenchReadThroughputLatency.class); private static final Pattern LEDGER_PATTERN = Pattern.compile("L([0-9]+)$"); private static final Comparator<String> ZK_LEDGER_COMPARE = new Comparator<String>() { public int compare(String o1, String o2) { try { Matcher m1 = LEDGER_PATTERN.matcher(o1); Matcher m2 = LEDGER_PATTERN.matcher(o2); if (m1.find() && m2.find()) { return Integer.valueOf(m1.group(1)) - Integer.valueOf(m2.group(1)); } else { return o1.compareTo(o2); } } catch (Throwable t) { return o1.compareTo(o2); } } }; private static void readLedger(ClientConfiguration conf, long ledgerId, byte[] passwd) { LOG.info("Reading ledger {}", ledgerId); BookKeeper bk = null; long time = 0; long entriesRead = 0; long lastRead = 0; int nochange = 0; long absoluteLimit = 5000000; LedgerHandle lh = null; try { bk = new BookKeeper(conf); while (true) { lh = bk.openLedgerNoRecovery(ledgerId, BookKeeper.DigestType.CRC32, passwd); long lastConfirmed = Math.min(lh.getLastAddConfirmed(), absoluteLimit); if (lastConfirmed == lastRead) { nochange++; if (nochange == 10) { break; } else { Thread.sleep(1000); continue; } } else { nochange = 0; } long starttime = System.nanoTime(); while (lastRead < lastConfirmed) { long nextLimit = lastRead + 100000; long readTo = Math.min(nextLimit, lastConfirmed); Enumeration<LedgerEntry> entries = lh.readEntries(lastRead+1, readTo); lastRead = readTo; while (entries.hasMoreElements()) { LedgerEntry e = entries.nextElement(); entriesRead++; if ((entriesRead % 10000) == 0) { LOG.info("{} entries read", entriesRead); } } } long endtime = System.nanoTime(); time += endtime - starttime; lh.close(); lh = null; Thread.sleep(1000); } } catch (InterruptedException ie) { // ignore } catch (Exception e) { LOG.error("Exception in reader", e); } finally { LOG.info("Read {} in {}ms", entriesRead, time/1000/1000); try { if (lh != null) { lh.close(); } if (bk != null) { bk.close(); } } catch (Exception e) { LOG.error("Exception closing stuff", e); } } } private static void usage(Options options) { HelpFormatter formatter = new HelpFormatter(); formatter.printHelp("BenchReadThroughputLatency <options>", options); } @SuppressWarnings("deprecation") public static void main(String[] args) throws Exception { Options options = new Options(); options.addOption("ledger", true, "Ledger to read. 
If empty, read all ledgers as they become available. " + " Cannot be used with -listen"); options.addOption("listen", true, "Listen for creation of ledgers, and read each one fully"); options.addOption("password", true, "Password used to access ledgers (default 'benchPasswd')"); options.addOption("zookeeper", true, "Zookeeper ensemble, default \"localhost:2181\""); options.addOption("sockettimeout", true, "Socket timeout for bookkeeper client. In seconds. Default 5"); options.addOption("help", false, "This message"); CommandLineParser parser = new PosixParser(); CommandLine cmd = parser.parse(options, args); if (cmd.hasOption("help")) { usage(options); System.exit(-1); } final String servers = cmd.getOptionValue("zookeeper", "localhost:2181"); final byte[] passwd = cmd.getOptionValue("password", "benchPasswd").getBytes(); final int sockTimeout = Integer.valueOf(cmd.getOptionValue("sockettimeout", "5")); if (cmd.hasOption("ledger") && cmd.hasOption("listen")) { LOG.error("Cannot use -ledger and -listen together"); usage(options); System.exit(-1); } final AtomicInteger ledger = new AtomicInteger(0); final AtomicInteger numLedgers = new AtomicInteger(0); if (cmd.hasOption("ledger")) { ledger.set(Integer.valueOf(cmd.getOptionValue("ledger"))); } else if (cmd.hasOption("listen")) { numLedgers.set(Integer.valueOf(cmd.getOptionValue("listen"))); } else { LOG.error("You must use -ledger or -listen"); usage(options); System.exit(-1); } final CountDownLatch shutdownLatch = new CountDownLatch(1); final CountDownLatch connectedLatch = new CountDownLatch(1); final String nodepath = String.format("/ledgers/L%010d", ledger.get()); final ClientConfiguration conf = new ClientConfiguration(); conf.setReadTimeout(sockTimeout).setZkServers(servers); final ZooKeeper zk = new ZooKeeper(servers, 3000, new Watcher() { public void process(WatchedEvent event) { if (event.getState() == Event.KeeperState.SyncConnected && event.getType() == Event.EventType.None) { connectedLatch.countDown(); } } }); try { zk.register(new Watcher() { public void process(WatchedEvent event) { try { if (event.getState() == Event.KeeperState.SyncConnected && event.getType() == Event.EventType.None) { connectedLatch.countDown(); } else if (event.getType() == Event.EventType.NodeCreated && event.getPath().equals(nodepath)) { readLedger(conf, ledger.get(), passwd); shutdownLatch.countDown(); } else if (event.getType() == Event.EventType.NodeChildrenChanged) { if (numLedgers.get() < 0) { return; } List<String> children = zk.getChildren("/ledgers", true); List<String> ledgers = new ArrayList<String>(); for (String child : children) { if (LEDGER_PATTERN.matcher(child).find()) { ledgers.add(child); } } Collections.sort(ledgers, ZK_LEDGER_COMPARE); String last = ledgers.get(ledgers.size() - 1); final Matcher m = LEDGER_PATTERN.matcher(last); if (m.find()) { int ledgersLeft = numLedgers.decrementAndGet(); Thread t = new Thread() { public void run() { readLedger(conf, Long.valueOf(m.group(1)), passwd); } }; t.start(); if (ledgersLeft <= 0) { shutdownLatch.countDown(); } } else { LOG.error("Can't find ledger id in {}", last); } } else { LOG.warn("Unknown event {}", event); } } catch (Exception e) { LOG.error("Exception in watcher", e); } } }); connectedLatch.await(); if (ledger.get() != 0) { if (zk.exists(nodepath, true) != null) { readLedger(conf, ledger.get(), passwd); shutdownLatch.countDown(); } else { LOG.info("Watching for creation of " + nodepath); } } else { zk.getChildren("/ledgers", true); } shutdownLatch.await(); LOG.info("Shutting down"); } finally { zk.close(); } } 
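/* Example invocations (a sketch; the ledger id and ZooKeeper address below are illustrative assumptions, not taken from this file). The bookkeeper script can run any class with a main() method, so this benchmark can be launched as: bin/bookkeeper org.apache.bookkeeper.benchmark.BenchReadThroughputLatency --zookeeper localhost:2181 --listen 10 to read the next ten ledgers fully as they are created, or as: bin/bookkeeper org.apache.bookkeeper.benchmark.BenchReadThroughputLatency --zookeeper localhost:2181 --ledger 42 to read a single existing ledger written with the default 'benchPasswd' password. */ 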
}BenchThroughputLatency.java000066400000000000000000000420111244507361200373060ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-benchmark/src/main/java/org/apache/bookkeeper/benchmark/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.benchmark; import java.io.BufferedOutputStream; import java.io.FileOutputStream; import java.io.IOException; import java.io.OutputStream; import java.util.ArrayList; import java.util.Arrays; import java.util.Collections; import java.util.Random; import java.util.concurrent.CountDownLatch; import java.util.concurrent.Semaphore; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicLong; import java.util.Timer; import java.util.TimerTask; import org.apache.bookkeeper.client.BKException; import org.apache.bookkeeper.client.BookKeeper; import org.apache.bookkeeper.client.LedgerHandle; import org.apache.bookkeeper.client.AsyncCallback.AddCallback; import org.apache.bookkeeper.conf.ClientConfiguration; import org.apache.commons.cli.CommandLine; import org.apache.commons.cli.CommandLineParser; import org.apache.commons.cli.HelpFormatter; import org.apache.commons.cli.Options; import org.apache.commons.cli.ParseException; import org.apache.commons.cli.PosixParser; import org.apache.zookeeper.CreateMode; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.WatchedEvent; import org.apache.zookeeper.Watcher; import org.apache.zookeeper.ZooDefs; import org.apache.zookeeper.ZooKeeper; import org.apache.zookeeper.Watcher.Event.EventType; import org.apache.zookeeper.Watcher.Event.KeeperState; import org.slf4j.Logger; import org.slf4j.LoggerFactory; public class BenchThroughputLatency implements AddCallback, Runnable { static Logger LOG = LoggerFactory.getLogger(BenchThroughputLatency.class); BookKeeper bk; LedgerHandle lh[]; AtomicLong counter; Semaphore sem; int numberOfLedgers = 1; final int sendLimit; final long latencies[]; static class Context { long localStartTime; long id; Context(long id, long time){ this.id = id; this.localStartTime = time; } } public BenchThroughputLatency(int ensemble, int writeQuorumSize, int ackQuorumSize, byte[] passwd, int numberOfLedgers, int sendLimit, ClientConfiguration conf) throws KeeperException, IOException, InterruptedException { this.sem = new Semaphore(conf.getThrottleValue()); bk = new BookKeeper(conf); this.counter = new AtomicLong(0); this.numberOfLedgers = numberOfLedgers; this.sendLimit = sendLimit; this.latencies = new long[sendLimit]; try{ lh = new LedgerHandle[this.numberOfLedgers]; for(int i = 0; i < this.numberOfLedgers; i++) { lh[i] = bk.createLedger(ensemble, writeQuorumSize, ackQuorumSize, BookKeeper.DigestType.CRC32, passwd); LOG.debug("Ledger Handle: " + lh[i].getId()); } } catch (BKException e) { 
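/* Ledger creation failed; the benchmark cannot write without its ledgers. run() tolerates null handles, so the error is only reported here. */ 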
e.printStackTrace(); } } Random rand = new Random(); public void close() throws InterruptedException, BKException { for(int i = 0; i < numberOfLedgers; i++) { lh[i].close(); } bk.close(); } long previous = 0; byte bytes[]; void setEntryData(byte data[]) { bytes = data; } int lastLedger = 0; private int getRandomLedger() { return rand.nextInt(numberOfLedgers); } int latencyIndex = -1; AtomicLong completedRequests = new AtomicLong(0); long duration = -1; synchronized public long getDuration() { return duration; } public void run() { LOG.info("Running..."); long start = previous = System.currentTimeMillis(); int sent = 0; Thread reporter = new Thread() { public void run() { try { while(true) { Thread.sleep(1000); LOG.info("ms: {} req: {}", System.currentTimeMillis(), completedRequests.getAndSet(0)); } } catch (InterruptedException ie) { LOG.info("Caught interrupted exception, going away"); } } }; reporter.start(); long beforeSend = System.nanoTime(); while(!Thread.currentThread().isInterrupted() && sent < sendLimit) { try { sem.acquire(); if (sent == 10000) { long afterSend = System.nanoTime(); long time = afterSend - beforeSend; LOG.info("Time to send first batch: {}s {}ns ", time/1000/1000/1000, time); } } catch (InterruptedException e) { break; } final int index = getRandomLedger(); LedgerHandle h = lh[index]; if (h == null) { LOG.error("Handle " + index + " is null!"); } else { long nanoTime = System.nanoTime(); lh[index].asyncAddEntry(bytes, this, new Context(sent, nanoTime)); counter.incrementAndGet(); } sent++; } LOG.info("Sent: " + sent); try { int i = 0; while(this.counter.get() > 0) { Thread.sleep(1000); i++; if (i > 30) { break; } } } catch(InterruptedException e) { LOG.error("Interrupted while waiting", e); } synchronized(this) { duration = System.currentTimeMillis() - start; } throughput = sent*1000/getDuration(); reporter.interrupt(); try { reporter.join(); } catch (InterruptedException ie) { // ignore } LOG.info("Finished processing in ms: " + getDuration() + " tp = " + throughput); } long throughput = -1; public long getThroughput() { return throughput; } long threshold = 20000; long runningAverageCounter = 0; long totalTime = 0; @Override public void addComplete(int rc, LedgerHandle lh, long entryId, Object ctx) { Context context = (Context) ctx; // we need to use the id passed in the context in the case of // multiple ledgers, and it works even with one ledger entryId = context.id; long newTime = System.nanoTime() - context.localStartTime; sem.release(); counter.decrementAndGet(); if (rc == 0) { latencies[(int)entryId] = newTime; completedRequests.incrementAndGet(); } } @SuppressWarnings("deprecation") public static void main(String[] args) throws KeeperException, IOException, InterruptedException, ParseException, BKException { Options options = new Options(); options.addOption("time", true, "Running time (seconds), default 60"); options.addOption("entrysize", true, "Entry size (bytes), default 1024"); options.addOption("ensemble", true, "Ensemble size, default 3"); options.addOption("quorum", true, "Quorum size, default 2"); options.addOption("ackQuorum", true, "Ack quorum size, default is same as quorum"); options.addOption("throttle", true, "Max outstanding requests, default 10000"); options.addOption("ledgers", true, "Number of ledgers, default 1"); options.addOption("zookeeper", true, "Zookeeper ensemble, default \"localhost:2181\""); options.addOption("password", true, "Password used to create ledgers (default 'benchPasswd')"); options.addOption("coordnode", true, 
"Coordination znode for multi client benchmarks (optional)"); options.addOption("timeout", true, "Number of seconds after which to give up"); options.addOption("sockettimeout", true, "Socket timeout for bookkeeper client. In seconds. Default 5"); options.addOption("skipwarmup", false, "Skip warm up, default false"); options.addOption("sendlimit", true, "Max number of entries to send. Default 20000000"); options.addOption("latencyFile", true, "File to dump latencies. Default is latencyDump.dat"); options.addOption("help", false, "This message"); CommandLineParser parser = new PosixParser(); CommandLine cmd = parser.parse(options, args); if (cmd.hasOption("help")) { HelpFormatter formatter = new HelpFormatter(); formatter.printHelp("BenchThroughputLatency ", options); System.exit(-1); } long runningTime = Long.valueOf(cmd.getOptionValue("time", "60")); String servers = cmd.getOptionValue("zookeeper", "localhost:2181"); int entrysize = Integer.valueOf(cmd.getOptionValue("entrysize", "1024")); int ledgers = Integer.valueOf(cmd.getOptionValue("ledgers", "1")); int ensemble = Integer.valueOf(cmd.getOptionValue("ensemble", "3")); int quorum = Integer.valueOf(cmd.getOptionValue("quorum", "2")); int ackQuorum = quorum; if (cmd.hasOption("ackQuorum")) { ackQuorum = Integer.valueOf(cmd.getOptionValue("ackQuorum")); } int throttle = Integer.valueOf(cmd.getOptionValue("throttle", "10000")); int sendLimit = Integer.valueOf(cmd.getOptionValue("sendlimit", "20000000")); final int sockTimeout = Integer.valueOf(cmd.getOptionValue("sockettimeout", "5")); String coordinationZnode = cmd.getOptionValue("coordnode"); final byte[] passwd = cmd.getOptionValue("password", "benchPasswd").getBytes(); String latencyFile = cmd.getOptionValue("latencyFile", "latencyDump.dat"); Timer timeouter = new Timer(); if (cmd.hasOption("timeout")) { final long timeout = Long.valueOf(cmd.getOptionValue("timeout", "360")) * 1000; timeouter.schedule(new TimerTask() { public void run() { System.err.println("Timing out benchmark after " + timeout + "ms"); System.exit(-1); } }, timeout); } LOG.warn("(Parameters received) running time: " + runningTime + ", entry size: " + entrysize + ", ensemble size: " + ensemble + ", quorum size: " + quorum + ", throttle: " + throttle + ", number of ledgers: " + ledgers + ", zk servers: " + servers + ", latency file: " + latencyFile); long totalTime = runningTime*1000; // Do a warmup run Thread thread; byte data[] = new byte[entrysize]; Arrays.fill(data, (byte)'x'); ClientConfiguration conf = new ClientConfiguration(); conf.setThrottleValue(throttle).setReadTimeout(sockTimeout).setZkServers(servers); if (!cmd.hasOption("skipwarmup")) { long throughput; LOG.info("Starting warmup"); throughput = warmUp(data, ledgers, ensemble, quorum, passwd, conf); LOG.info("Warmup tp: " + throughput); LOG.info("Warmup phase finished"); } // Now do the benchmark BenchThroughputLatency bench = new BenchThroughputLatency(ensemble, quorum, ackQuorum, passwd, ledgers, sendLimit, conf); bench.setEntryData(data); thread = new Thread(bench); ZooKeeper zk = null; if (coordinationZnode != null) { final CountDownLatch connectLatch = new CountDownLatch(1); zk = new ZooKeeper(servers, 15000, new Watcher() { @Override public void process(WatchedEvent event) { if (event.getState() == KeeperState.SyncConnected) { connectLatch.countDown(); } }}); if (!connectLatch.await(10, TimeUnit.SECONDS)) { LOG.error("Couldn't connect to zookeeper at " + servers); zk.close(); System.exit(-1); } final CountDownLatch latch = new CountDownLatch(1); 
LOG.info("Waiting for " + coordinationZnode); if (zk.exists(coordinationZnode, new Watcher() { @Override public void process(WatchedEvent event) { if (event.getType() == EventType.NodeCreated) { latch.countDown(); } }}) != null) { latch.countDown(); } latch.await(); LOG.info("Coordination znode created"); } thread.start(); Thread.sleep(totalTime); thread.interrupt(); thread.join(); LOG.info("Calculating percentiles"); int numlat = 0; for(int i = 0; i < bench.latencies.length; i++) { if (bench.latencies[i] > 0) { numlat++; } } int numcompletions = numlat; numlat = Math.min(bench.sendLimit, numlat); long[] latency = new long[numlat]; int j =0; for(int i = 0; i < bench.latencies.length && j < numlat; i++) { if (bench.latencies[i] > 0) { latency[j++] = bench.latencies[i]; } } Arrays.sort(latency); long tp = (long)((double)(numcompletions*1000.0)/(double)bench.getDuration()); LOG.info(numcompletions + " completions in " + bench.getDuration() + " seconds: " + tp + " ops/sec"); if (zk != null) { zk.create(coordinationZnode + "/worker-", ("tp " + tp + " duration " + bench.getDuration()).getBytes(), ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT_SEQUENTIAL); zk.close(); } // dump the latencies for later debugging (it will be sorted by entryid) OutputStream fos = new BufferedOutputStream(new FileOutputStream(latencyFile)); for(Long l: latency) { fos.write((Long.toString(l)+"\t"+(l/1000000)+ "ms\n").getBytes()); } fos.flush(); fos.close(); // now get the latencies LOG.info("99th percentile latency: {}", percentile(latency, 99)); LOG.info("95th percentile latency: {}", percentile(latency, 95)); bench.close(); timeouter.cancel(); } private static double percentile(long[] latency, int percentile) { int size = latency.length; int sampleSize = (size * percentile) / 100; long total = 0; int count = 0; for(int i = 0; i < sampleSize; i++) { total += latency[i]; count++; } return ((double)total/(double)count)/1000000.0; } private static long warmUp(byte[] data, int ledgers, int ensemble, int qSize, byte[] passwd, ClientConfiguration conf) throws KeeperException, IOException, InterruptedException, BKException { final CountDownLatch connectLatch = new CountDownLatch(1); final int bookies; String bookieRegistrationPath = conf.getZkAvailableBookiesPath(); ZooKeeper zk = null; try { final String servers = conf.getZkServers(); zk = new ZooKeeper(servers, 15000, new Watcher() { @Override public void process(WatchedEvent event) { if (event.getState() == KeeperState.SyncConnected) { connectLatch.countDown(); } }}); if (!connectLatch.await(10, TimeUnit.SECONDS)) { LOG.error("Couldn't connect to zookeeper at " + servers); throw new IOException("Couldn't connect to zookeeper " + servers); } bookies = zk.getChildren(bookieRegistrationPath, false).size(); } finally { if (zk != null) { zk.close(); } } BenchThroughputLatency warmup = new BenchThroughputLatency(bookies, bookies, bookies, passwd, ledgers, 10000, conf); warmup.setEntryData(data); Thread thread = new Thread(warmup); thread.start(); thread.join(); warmup.close(); return warmup.getThroughput(); } } MySqlClient.java000066400000000000000000000120021244507361200350560ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-benchmark/src/main/java/org/apache/bookkeeper/benchmarkpackage org.apache.bookkeeper.benchmark; /** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. 
The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import java.io.FileOutputStream; import java.sql.Connection; import java.sql.DriverManager; import java.sql.SQLException; import java.sql.Statement; import java.util.HashMap; import org.apache.bookkeeper.client.BookKeeper; import org.apache.bookkeeper.client.LedgerHandle; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.apache.zookeeper.KeeperException; public class MySqlClient { static Logger LOG = LoggerFactory.getLogger(MySqlClient.class); BookKeeper x; LedgerHandle lh; Integer entryId; HashMap map; FileOutputStream fStream; FileOutputStream fStreamLocal; long start, lastId; Connection con; Statement stmt; public MySqlClient(String hostport, String user, String pass) throws ClassNotFoundException { entryId = 0; map = new HashMap(); Class.forName("com.mysql.jdbc.Driver"); // database is named "bookkeeper" String url = "jdbc:mysql://" + hostport + "/bookkeeper"; try { con = DriverManager.getConnection(url, user, pass); stmt = con.createStatement(); // drop table and recreate it stmt.execute("DROP TABLE IF EXISTS data;"); stmt.execute("create table data(transaction_id bigint PRIMARY KEY AUTO_INCREMENT, content TEXT);"); LOG.info("Database initialization terminated"); } catch (SQLException e) { e.printStackTrace(); } } public void closeHandle() throws KeeperException, InterruptedException, SQLException { con.close(); } /** * First parameter is the number of writes. * Second parameter is an integer defining the length of the message. * Third parameter is host:port * Fourth parameter is username * Fifth parameter is password * @param args * @throws ClassNotFoundException * @throws SQLException */ public static void main(String[] args) throws ClassNotFoundException, SQLException { int length = Integer.parseInt(args[1]); StringBuilder sb = new StringBuilder(); while(length-- > 0) { sb.append('a'); } try { MySqlClient c = new MySqlClient(args[2], args[3], args[4]); c.writeSameEntryBatch(sb.toString().getBytes(), Integer.parseInt(args[0])); c.writeSameEntry(sb.toString().getBytes(), Integer.parseInt(args[0])); c.closeHandle(); } catch (NumberFormatException e) { e.printStackTrace(); } catch (InterruptedException e) { e.printStackTrace(); } catch (KeeperException e) { e.printStackTrace(); } } /** * Adds data entry to the DB * @param data the entry to be written, given as a byte array * @param times the number of times the entry should be written to the DB */ void writeSameEntryBatch(byte[] data, int times) throws InterruptedException, SQLException { start = System.currentTimeMillis(); int count = times; String content = new String(data); System.out.println("Data: " + content + ", " + data.length); while(count-- > 0) { stmt.addBatch("insert into data(content) values(\"" + content + "\");"); } LOG.info("Finished writing batch SQL command in ms: " + (System.currentTimeMillis() - start)); start = System.currentTimeMillis(); stmt.executeBatch(); System.out.println("Finished " + times + " writes 
in ms: " + (System.currentTimeMillis() - start)); LOG.info("Ended computation"); } void writeSameEntry(byte[] data, int times) throws InterruptedException, SQLException { start = System.currentTimeMillis(); int count = times; String content = new String(data); System.out.println("Data: " + content + ", " + data.length); while(count-- > 0) { stmt.executeUpdate("insert into data(content) values(\"" + content + "\");"); } System.out.println("Finished " + times + " writes in ms: " + (System.currentTimeMillis() - start)); LOG.info("Ended computation"); } } TestClient.java000066400000000000000000000343751244507361200347510ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-benchmark/src/main/java/org/apache/bookkeeper/benchmark/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.benchmark; import java.io.FileOutputStream; import java.io.IOException; import java.util.ArrayList; import java.util.List; import java.util.Random; import java.util.Timer; import java.util.TimerTask; import java.util.concurrent.Callable; import java.util.concurrent.ExecutionException; import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; import java.util.concurrent.Future; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicLong; import org.apache.bookkeeper.client.BKException; import org.apache.bookkeeper.client.BookKeeper; import org.apache.bookkeeper.client.LedgerHandle; import org.apache.bookkeeper.client.AsyncCallback.AddCallback; import org.apache.bookkeeper.client.BookKeeper.DigestType; import org.apache.bookkeeper.conf.ClientConfiguration; import org.apache.commons.cli.CommandLine; import org.apache.commons.cli.CommandLineParser; import org.apache.commons.cli.HelpFormatter; import org.apache.commons.cli.Options; import org.apache.commons.cli.ParseException; import org.apache.commons.cli.PosixParser; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FSDataOutputStream; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; import org.apache.zookeeper.KeeperException; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * This is a simple test program to compare the performance of writing to * BookKeeper and to the local file system. * */ public class TestClient { private static final Logger LOG = LoggerFactory.getLogger(TestClient.class); /** * First says if entries should be written to BookKeeper (0) or to the local * disk (1). Second parameter is an integer defining the length of a ledger entry. * Third parameter is the number of writes. * * @param args */ public static void main(String[] args) throws ParseException { Options options = new Options(); options.addOption("length", true, "Length of packets being written. 
Default 1024"); options.addOption("target", true, "Target medium to write to. Options are bk, fs & hdfs. Default fs"); options.addOption("runfor", true, "Number of seconds to run for. Default 60"); options.addOption("path", true, "Path to write to. fs & hdfs only. Default /foobar"); options.addOption("zkservers", true, "ZooKeeper servers, comma separated. bk only. Default localhost:2181."); options.addOption("bkensemble", true, "BookKeeper ledger ensemble size. bk only. Default 3"); options.addOption("bkquorum", true, "BookKeeper ledger quorum size. bk only. Default 2"); options.addOption("bkthrottle", true, "BookKeeper throttle size. bk only. Default 10000"); options.addOption("sync", false, "Use synchronous writes with BookKeeper. bk only."); options.addOption("numconcurrent", true, "Number of concurrently clients. Default 1"); options.addOption("timeout", true, "Number of seconds after which to give up"); options.addOption("help", false, "This message"); CommandLineParser parser = new PosixParser(); CommandLine cmd = parser.parse(options, args); if (cmd.hasOption("help")) { HelpFormatter formatter = new HelpFormatter(); formatter.printHelp("TestClient ", options); System.exit(-1); } int length = Integer.valueOf(cmd.getOptionValue("length", "1024")); String target = cmd.getOptionValue("target", "fs"); long runfor = Long.valueOf(cmd.getOptionValue("runfor", "60")) * 1000; StringBuilder sb = new StringBuilder(); while(length-- > 0) { sb.append('a'); } Timer timeouter = new Timer(); if (cmd.hasOption("timeout")) { final long timeout = Long.valueOf(cmd.getOptionValue("timeout", "360")) * 1000; timeouter.schedule(new TimerTask() { public void run() { System.err.println("Timing out benchmark after " + timeout + "ms"); System.exit(-1); } }, timeout); } BookKeeper bkc = null; try { int numFiles = Integer.valueOf(cmd.getOptionValue("numconcurrent", "1")); int numThreads = Math.min(numFiles, 1000); byte[] data = sb.toString().getBytes(); long runid = System.currentTimeMillis(); List> clients = new ArrayList>(); if (target.equals("bk")) { String zkservers = cmd.getOptionValue("zkservers", "localhost:2181"); int bkensemble = Integer.valueOf(cmd.getOptionValue("bkensemble", "3")); int bkquorum = Integer.valueOf(cmd.getOptionValue("bkquorum", "2")); int bkthrottle = Integer.valueOf(cmd.getOptionValue("bkthrottle", "10000")); ClientConfiguration conf = new ClientConfiguration(); conf.setThrottleValue(bkthrottle); conf.setZkServers(zkservers); bkc = new BookKeeper(conf); List handles = new ArrayList(); for (int i = 0; i < numFiles; i++) { handles.add(bkc.createLedger(bkensemble, bkquorum, DigestType.CRC32, new byte[] {'a', 'b'})); } for (int i = 0; i < numFiles; i++) { clients.add(new BKClient(handles, data, runfor, cmd.hasOption("sync"))); } } else if (target.equals("hdfs")) { FileSystem fs = FileSystem.get(new Configuration()); LOG.info("Default replication for HDFS: {}", fs.getDefaultReplication()); List streams = new ArrayList(); for (int i = 0; i < numFiles; i++) { String path = cmd.getOptionValue("path", "/foobar"); streams.add(fs.create(new Path(path + runid + "_" + i))); } for (int i = 0; i < numThreads; i++) { clients.add(new HDFSClient(streams, data, runfor)); } } else if (target.equals("fs")) { List streams = new ArrayList(); for (int i = 0; i < numFiles; i++) { String path = cmd.getOptionValue("path", "/foobar " + i); streams.add(new FileOutputStream(path + runid + "_" + i)); } for (int i = 0; i < numThreads; i++) { clients.add(new FileClient(streams, data, runfor)); } } else { 
LOG.error("Unknown option: " + target); throw new IllegalArgumentException("Unknown target " + target); } ExecutorService executor = Executors.newFixedThreadPool(numThreads); long start = System.currentTimeMillis(); List> results = executor.invokeAll(clients, 10, TimeUnit.MINUTES); long end = System.currentTimeMillis(); long count = 0; for (Future r : results) { if (!r.isDone()) { LOG.warn("Job didn't complete"); System.exit(2); } long c = r.get(); if (c == 0) { LOG.warn("Task didn't complete"); } count += c; } long time = end-start; LOG.info("Finished processing writes (ms): {} TPT: {} op/s", time, count/((double)time/1000)); executor.shutdown(); } catch (ExecutionException ee) { LOG.error("Exception in worker", ee); } catch (KeeperException ke) { LOG.error("Error accessing zookeeper", ke); } catch (BKException e) { LOG.error("Error accessing bookkeeper", e); } catch (IOException ioe) { LOG.error("I/O exception during benchmark", ioe); } catch (InterruptedException ie) { LOG.error("Benchmark interrupted", ie); } finally { if (bkc != null) { try { bkc.close(); } catch (BKException bke) { LOG.error("Error closing bookkeeper client", bke); } catch (InterruptedException ie) { LOG.warn("Interrupted closing bookkeeper client", ie); } } } timeouter.cancel(); } static class HDFSClient implements Callable { final List streams; final byte[] data; final long time; final Random r; HDFSClient(List streams, byte[] data, long time) { this.streams = streams; this.data = data; this.time = time; this.r = new Random(System.identityHashCode(this)); } public Long call() { try { long count = 0; long start = System.currentTimeMillis(); long stopat = start + time; while(System.currentTimeMillis() < stopat) { FSDataOutputStream stream = streams.get(r.nextInt(streams.size())); synchronized(stream) { stream.write(data); stream.flush(); stream.hflush(); } count++; } long time = (System.currentTimeMillis() - start); LOG.info("Worker finished processing writes (ms): {} TPT: {} op/s", time, count/((double)time/1000)); return count; } catch(IOException ioe) { LOG.error("Exception in worker thread", ioe); return 0L; } } } static class FileClient implements Callable { final List streams; final byte[] data; final long time; final Random r; FileClient(List streams, byte[] data, long time) { this.streams = streams; this.data = data; this.time = time; this.r = new Random(System.identityHashCode(this)); } public Long call() { try { long count = 0; long start = System.currentTimeMillis(); long stopat = start + time; while(System.currentTimeMillis() < stopat) { FileOutputStream stream = streams.get(r.nextInt(streams.size())); synchronized(stream) { stream.write(data); stream.flush(); stream.getChannel().force(false); } count++; } long time = (System.currentTimeMillis() - start); LOG.info("Worker finished processing writes (ms): {} TPT: {} op/s", time, count/((double)time/1000)); return count; } catch(IOException ioe) { LOG.error("Exception in worker thread", ioe); return 0L; } } } static class BKClient implements Callable, AddCallback { final List handles; final byte[] data; final long time; final Random r; final boolean sync; final AtomicLong success = new AtomicLong(0); final AtomicLong outstanding = new AtomicLong(0); BKClient(List handles, byte[] data, long time, boolean sync) { this.handles = handles; this.data = data; this.time = time; this.r = new Random(System.identityHashCode(this)); this.sync = sync; } public Long call() { try { long start = System.currentTimeMillis(); long stopat = start + time; 
while(System.currentTimeMillis() < stopat) { LedgerHandle lh = handles.get(r.nextInt(handles.size())); if (sync) { lh.addEntry(data); success.incrementAndGet(); } else { lh.asyncAddEntry(data, this, null); outstanding.incrementAndGet(); } } int ticks = 10; // bounded grace period for outstanding async adds (10 x 10ms) while (outstanding.get() > 0 && ticks-- > 0) { Thread.sleep(10); } long time = (System.currentTimeMillis() - start); LOG.info("Worker finished processing writes (ms): {} TPT: {} op/s", time, success.get()/((double)time/1000)); return success.get(); } catch (BKException e) { LOG.error("Exception in worker thread", e); return 0L; } catch (InterruptedException ie) { LOG.error("Exception in worker thread", ie); return 0L; } } @Override public void addComplete(int rc, LedgerHandle lh, long entryId, Object ctx) { if (rc == BKException.Code.OK) { success.incrementAndGet(); } outstanding.decrementAndGet(); } } } bookkeeper-release-4.2.4/bookkeeper-benchmark/src/test/000077500000000000000000000000001244507361200231155ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-benchmark/src/test/java/000077500000000000000000000000001244507361200240365ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-benchmark/src/test/java/org/000077500000000000000000000000001244507361200246255ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-benchmark/src/test/java/org/apache/000077500000000000000000000000001244507361200260465ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-benchmark/src/test/java/org/apache/bookkeeper/000077500000000000000000000000001244507361200301745ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-benchmark/src/test/java/org/apache/bookkeeper/benchmark/000077500000000000000000000000001244507361200321265ustar00rootroot00000000000000TestBenchmark.java000066400000000000000000000135611244507361200354520ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-benchmark/src/test/java/org/apache/bookkeeper/benchmark/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
* */ package org.apache.bookkeeper.benchmark; import org.junit.BeforeClass; import org.junit.AfterClass; import org.junit.Test; import org.junit.Assert; import java.net.InetSocketAddress; import org.apache.bookkeeper.client.BookKeeper; import org.apache.bookkeeper.client.LedgerHandle; import org.apache.bookkeeper.util.LocalBookKeeper; import org.apache.bookkeeper.test.BookKeeperClusterTestCase; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicBoolean; import java.util.Arrays; import java.util.List; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.WatchedEvent; import org.apache.zookeeper.Watcher; import org.apache.zookeeper.ZooKeeper; import org.apache.zookeeper.Watcher.Event.EventType; import org.apache.zookeeper.Watcher.Event.KeeperState; public class TestBenchmark extends BookKeeperClusterTestCase { protected static final Logger LOG = LoggerFactory.getLogger(TestBenchmark.class); public TestBenchmark() { super(5); } @Test(timeout=60000) public void testThroughputLatency() throws Exception { String latencyFile = System.getProperty("test.latency.file", "latencyDump.dat"); BenchThroughputLatency.main(new String[] { "--zookeeper", zkUtil.getZooKeeperConnectString(), "--time", "10", "--skipwarmup", "--throttle", "1", "--sendlimit", "10000", "--latencyFile", latencyFile }); } @Test(timeout=60000) public void testBookie() throws Exception { InetSocketAddress bookie = getBookie(0); BenchBookie.main(new String[] { "--host", bookie.getHostName(), "--port", String.valueOf(bookie.getPort()), "--zookeeper", zkUtil.getZooKeeperConnectString() }); } @Test(timeout=60000) public void testReadThroughputLatency() throws Exception { final AtomicBoolean threwException = new AtomicBoolean(false); Thread t = new Thread() { public void run() { try { BenchReadThroughputLatency.main(new String[] { "--zookeeper", zkUtil.getZooKeeperConnectString(), "--listen", "10"}); } catch (Throwable t) { LOG.error("Error reading", t); threwException.set(true); } } }; t.start(); Thread.sleep(10000); byte data[] = new byte[1024]; Arrays.fill(data, (byte)'x'); long lastLedgerId = 0; Assert.assertTrue("Thread should be running", t.isAlive()); for (int i = 0; i < 10; i++) { BookKeeper bk = new BookKeeper(zkUtil.getZooKeeperConnectString()); LedgerHandle lh = bk.createLedger(BookKeeper.DigestType.CRC32, "benchPasswd".getBytes()); lastLedgerId = lh.getId(); try { for (int j = 0; j < 100; j++) { lh.addEntry(data); } } finally { lh.close(); bk.close(); } } for (int i = 0; i < 60; i++) { if (!t.isAlive()) { break; } Thread.sleep(1000); // wait up to 60 seconds for reading to finish } Assert.assertFalse("Thread should be finished", t.isAlive()); BenchReadThroughputLatency.main(new String[] { "--zookeeper", zkUtil.getZooKeeperConnectString(), "--ledger", String.valueOf(lastLedgerId)}); final long nextLedgerId = lastLedgerId+1; t = new Thread() { public void run() { try { BenchReadThroughputLatency.main(new String[] { "--zookeeper", zkUtil.getZooKeeperConnectString(), "--ledger", String.valueOf(nextLedgerId)}); } catch (Throwable t) { LOG.error("Error reading", t); threwException.set(true); } } }; t.start(); Assert.assertTrue("Thread should be running", t.isAlive()); BookKeeper bk = new BookKeeper(zkUtil.getZooKeeperConnectString()); LedgerHandle lh = bk.createLedger(BookKeeper.DigestType.CRC32, "benchPasswd".getBytes()); try { for (int j = 0; j < 100; j++) { lh.addEntry(data); } } 
finally { lh.close(); bk.close(); } for (int i = 0; i < 60; i++) { if (!t.isAlive()) { break; } Thread.sleep(1000); // wait up to 60 seconds for reading to finish } Assert.assertFalse("Thread should be finished", t.isAlive()); Assert.assertFalse("A thread has thrown an exception, check logs", threwException.get()); } } bookkeeper-release-4.2.4/bookkeeper-benchmark/src/test/resources/000077500000000000000000000000001244507361200251275ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-benchmark/src/test/resources/log4j.properties000066400000000000000000000052331244507361200302670ustar00rootroot00000000000000# # # Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. # # # # Bookkeeper Logging Configuration # # Format is "<default threshold> (, <appender>)+ # DEFAULT: console appender only log4j.rootLogger=INFO, CONSOLE # Example with rolling log file #log4j.rootLogger=DEBUG, CONSOLE, ROLLINGFILE # Example with rolling log file and tracing #log4j.rootLogger=TRACE, CONSOLE, ROLLINGFILE, TRACEFILE log4j.logger.org.apache.zookeeper=ERROR # # Log INFO level and above messages to the console # log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender log4j.appender.CONSOLE.Threshold=INFO log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} - %-5p - [%t:%C{1}@%L] - %m%n # # Add ROLLINGFILE to rootLogger to get log file output # Log DEBUG level and above messages to a log file log4j.appender.ROLLINGFILE=org.apache.log4j.DailyRollingFileAppender log4j.appender.ROLLINGFILE.Threshold=DEBUG log4j.appender.ROLLINGFILE.File=bookkeeper-benchmark.log log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p - [%t:%C{1}@%L] - %m%n # Max log file size of 10MB log4j.appender.ROLLINGFILE.MaxFileSize=10MB # uncomment the next line to limit number of backup files #log4j.appender.ROLLINGFILE.MaxBackupIndex=10 # # Add TRACEFILE to rootLogger to get log file output # Log DEBUG level and above messages to a log file log4j.appender.TRACEFILE=org.apache.log4j.FileAppender log4j.appender.TRACEFILE.Threshold=TRACE log4j.appender.TRACEFILE.File=bookkeeper_trace.log log4j.appender.TRACEFILE.layout=org.apache.log4j.PatternLayout ### Notice we are including log4j's NDC here (%x) log4j.appender.TRACEFILE.layout.ConversionPattern=%d{ISO8601} - %-5p [%t:%C{1}@%L][%x] - %m%n 
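# Example (illustrative, not part of the stock configuration): to get verbose # output from the benchmark classes alone, a per-logger override can be added # without raising the root level, in the same style as the zookeeper override # above: #log4j.logger.org.apache.bookkeeper.benchmark=DEBUG 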
bookkeeper-release-4.2.4/bookkeeper-server/000077500000000000000000000000001244507361200207235ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/bin/000077500000000000000000000000001244507361200214735ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/bin/bookkeeper000077500000000000000000000150101244507361200235440ustar00rootroot00000000000000#!/usr/bin/env bash # #/** # * Copyright 2007 The Apache Software Foundation # * # * Licensed to the Apache Software Foundation (ASF) under one # * or more contributor license agreements. See the NOTICE file # * distributed with this work for additional information # * regarding copyright ownership. The ASF licenses this file # * to you under the Apache License, Version 2.0 (the # * "License"); you may not use this file except in compliance # * with the License. You may obtain a copy of the License at # * # * http://www.apache.org/licenses/LICENSE-2.0 # * # * Unless required by applicable law or agreed to in writing, software # * distributed under the License is distributed on an "AS IS" BASIS, # * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # * See the License for the specific language governing permissions and # * limitations under the License. # */ # check if net.ipv6.bindv6only is set to 1 bindv6only=$(/sbin/sysctl -n net.ipv6.bindv6only 2> /dev/null) if [ -n "$bindv6only" ] && [ "$bindv6only" -eq "1" ] then echo "Error: \"net.ipv6.bindv6only\" is set to 1 - Java networking could be broken" echo "For more info (the following page also applies to bookkeeper): http://wiki.apache.org/hadoop/HadoopIPv6" exit 1 fi # See the following page for extensive details on setting # up the JVM to accept JMX remote management: # http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html # by default we allow local JMX connections if [ "x$JMXLOCALONLY" = "x" ] then JMXLOCALONLY=false fi if [ "x$JMXDISABLE" = "x" ] then echo "JMX enabled by default" >&2 # for some reason these two options are necessary on jdk6 on Ubuntu # according to the docs they are not necessary, but otherwise jconsole cannot # do a local attach JMX_ARGS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=$JMXLOCALONLY" else echo "JMX disabled by user request" >&2 fi BINDIR=`dirname "$0"` BK_HOME=`cd $BINDIR/..;pwd` DEFAULT_CONF=$BK_HOME/conf/bk_server.conf DEFAULT_LOG_CONF=$BK_HOME/conf/log4j.properties source $BK_HOME/conf/bkenv.sh # Check for the java to use if [[ -z $JAVA_HOME ]]; then JAVA=$(which java) if [ $? = 0 ]; then echo "JAVA_HOME not set, using java from PATH. ($JAVA)" else echo "Error: JAVA_HOME not set, and no java executable found in $PATH." 1>&2 exit 1 fi else JAVA=$JAVA_HOME/bin/java fi # exclude tests jar RELEASE_JAR=`ls $BK_HOME/bookkeeper-server-*.jar 2> /dev/null | grep -v tests | tail -1` if [ $? == 0 ]; then BOOKIE_JAR=$RELEASE_JAR fi # exclude tests jar BUILT_JAR=`ls $BK_HOME/target/bookkeeper-server-*.jar 2> /dev/null | grep -v tests | tail -1` if [ $? != 0 ] && [ ! -e "$BOOKIE_JAR" ]; then echo "\nCouldn't find bookkeeper jar."; echo "Make sure you've run 'mvn package'\n"; exit 1; elif [ -e "$BUILT_JAR" ]; then BOOKIE_JAR=$BUILT_JAR fi bookkeeper_help() { cat <<EOF Usage: bookkeeper <command> where command is one of: bookie Run a bookie server autorecovery Run AutoRecovery service daemon localbookie Run a test ensemble of bookies locally upgrade Upgrade bookie filesystem shell Run shell for admin commands help This help message or command is the full name of a class with a defined main() method. 
Environment variables: BOOKIE_LOG_CONF Log4j configuration file (default $DEFAULT_LOG_CONF) BOOKIE_CONF Configuration file (default: $DEFAULT_CONF) BOOKIE_EXTRA_OPTS Extra options to be passed to the jvm BOOKIE_EXTRA_CLASSPATH Add extra paths to the bookkeeper classpath ENTRY_FORMATTER_CLASS Entry formatter class to format entries. These variables can also be set in conf/bkenv.sh EOF } add_maven_deps_to_classpath() { MVN="mvn" if [ "$MAVEN_HOME" != "" ]; then MVN=${MAVEN_HOME}/bin/mvn fi # Need to generate classpath from maven pom. This is costly so generate it # and cache it. Save the file into our target dir so a 'mvn clean' will # clean it up and force us to create a new one. f="${BK_HOME}/target/cached_classpath.txt" if [ ! -f "${f}" ] then ${MVN} -f "${BK_HOME}/pom.xml" dependency:build-classpath -Dmdep.outputFile="${f}" &> /dev/null fi BOOKIE_CLASSPATH=${CLASSPATH}:`cat "${f}"` } if [ -d "$BK_HOME/lib" ]; then for i in $BK_HOME/lib/*.jar; do BOOKIE_CLASSPATH=$BOOKIE_CLASSPATH:$i done else add_maven_deps_to_classpath fi # if no args specified, show usage if [ $# = 0 ]; then bookkeeper_help; exit 1; fi # get arguments COMMAND=$1 shift if [ $COMMAND == "shell" ]; then DEFAULT_LOG_CONF=$BK_HOME/conf/log4j.shell.properties fi if [ -z "$BOOKIE_CONF" ]; then BOOKIE_CONF=$DEFAULT_CONF fi if [ -z "$BOOKIE_LOG_CONF" ]; then BOOKIE_LOG_CONF=$DEFAULT_LOG_CONF fi BOOKIE_CLASSPATH="$BOOKIE_JAR:$BOOKIE_CLASSPATH:$BOOKIE_EXTRA_CLASSPATH" BOOKIE_CLASSPATH="`dirname $BOOKIE_LOG_CONF`:$BOOKIE_CLASSPATH" OPTS="$OPTS -Dlog4j.configuration=`basename $BOOKIE_LOG_CONF`" OPTS="-cp $BOOKIE_CLASSPATH $OPTS" OPTS="$OPTS $BOOKIE_EXTRA_OPTS" # Disable ipv6 as it can cause issues OPTS="$OPTS -Djava.net.preferIPv4Stack=true" # log directory & file BOOKIE_ROOT_LOGGER=${BOOKIE_ROOT_LOGGER:-"INFO,CONSOLE"} BOOKIE_LOG_DIR=${BOOKIE_LOG_DIR:-"$BK_HOME/logs"} BOOKIE_LOG_FILE=${BOOKIE_LOG_FILE:-"bookkeeper-server.log"} #Configure log configuration system properties OPTS="$OPTS -Dbookkeeper.root.logger=$BOOKIE_ROOT_LOGGER" OPTS="$OPTS -Dbookkeeper.log.dir=$BOOKIE_LOG_DIR" OPTS="$OPTS -Dbookkeeper.log.file=$BOOKIE_LOG_FILE" #Change to BK_HOME to support relative paths cd "$BK_HOME" if [ $COMMAND == "bookie" ]; then exec $JAVA $OPTS $JMX_ARGS org.apache.bookkeeper.proto.BookieServer --conf $BOOKIE_CONF $@ elif [ $COMMAND == "autorecovery" ]; then exec $JAVA $OPTS $JMX_ARGS org.apache.bookkeeper.replication.AutoRecoveryMain --conf $BOOKIE_CONF $@ elif [ $COMMAND == "localbookie" ]; then NUMBER=$1 shift exec $JAVA $OPTS $JMX_ARGS org.apache.bookkeeper.util.LocalBookKeeper $NUMBER $BOOKIE_CONF $@ elif [ $COMMAND == "upgrade" ]; then exec $JAVA $OPTS org.apache.bookkeeper.bookie.FileSystemUpgrade --conf $BOOKIE_CONF $@ elif [ $COMMAND == "shell" ]; then ENTRY_FORMATTER_ARG="-DentryFormatterClass=${ENTRY_FORMATTER_CLASS:-org.apache.bookkeeper.util.StringEntryFormatter}" exec $JAVA $OPTS $ENTRY_FORMATTER_ARG org.apache.bookkeeper.bookie.BookieShell -conf $BOOKIE_CONF $@ elif [ $COMMAND == "help" ]; then bookkeeper_help; else exec $JAVA $OPTS $COMMAND $@ fi bookkeeper-release-4.2.4/bookkeeper-server/bin/bookkeeper-daemon.sh000077500000000000000000000105451244507361200254260ustar00rootroot00000000000000#!/usr/bin/env bash # #/** # * Licensed to the Apache Software Foundation (ASF) under one # * or more contributor license agreements. See the NOTICE file # * distributed with this work for additional information # * regarding copyright ownership. 
The ASF licenses this file # * to you under the Apache License, Version 2.0 (the # * "License"); you may not use this file except in compliance # * with the License. You may obtain a copy of the License at # * # * http://www.apache.org/licenses/LICENSE-2.0 # * # * Unless required by applicable law or agreed to in writing, software # * distributed under the License is distributed on an "AS IS" BASIS, # * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # * See the License for the specific language governing permissions and # * limitations under the License. # */ usage() { cat <<EOF Usage: bookkeeper-daemon.sh (start|stop) <command> <argument ...> where command is one of: bookie Run the bookie server where argument is one of: -force (accepted only with stop command): Decides whether to stop the Bookie Server forcefully if not stopped by normal shutdown EOF } BINDIR=`dirname "$0"` BK_HOME=`cd $BINDIR/..;pwd` if [ -f $BK_HOME/conf/bkenv.sh ] then . $BK_HOME/conf/bkenv.sh fi BOOKIE_LOG_DIR=${BOOKIE_LOG_DIR:-"$BK_HOME/logs"} BOOKIE_ROOT_LOGGER=${BOOKIE_ROOT_LOGGER:-'INFO,ROLLINGFILE'} BOOKIE_STOP_TIMEOUT=${BOOKIE_STOP_TIMEOUT:-30} BOOKIE_PID_DIR=${BOOKIE_PID_DIR:-$BK_HOME/bin} if [ $# -lt 2 ] then echo "Error: not enough arguments provided." usage exit 1 fi startStop=$1 shift command=$1 shift case $command in (bookie) echo "doing $startStop $command ..." ;; (autorecovery) echo "doing $startStop $command ..." ;; (*) echo "Error: unknown service name $command" usage exit 1 ;; esac export BOOKIE_LOG_DIR=$BOOKIE_LOG_DIR export BOOKIE_ROOT_LOGGER=$BOOKIE_ROOT_LOGGER export BOOKIE_LOG_FILE=bookkeeper-$command-$HOSTNAME.log pid=$BOOKIE_PID_DIR/bookkeeper-$command.pid out=$BOOKIE_LOG_DIR/bookkeeper-$command-$HOSTNAME.out logfile=$BOOKIE_LOG_DIR/$BOOKIE_LOG_FILE rotate_out_log () { log=$1; num=5; if [ -n "$2" ]; then num=$2 fi if [ -f "$log" ]; then # rotate logs while [ $num -gt 1 ]; do prev=`expr $num - 1` [ -f "$log.$prev" ] && mv "$log.$prev" "$log.$num" num=$prev done mv "$log" "$log.$num"; fi } mkdir -p "$BOOKIE_LOG_DIR" case $startStop in (start) if [ -f $pid ]; then if kill -0 `cat $pid` > /dev/null 2>&1; then echo $command running as process `cat $pid`. Stop it first. exit 1 fi fi rotate_out_log $out echo starting $command, logging to $logfile bookkeeper=$BK_HOME/bin/bookkeeper nohup $bookkeeper $command "$@" > "$out" 2>&1 < /dev/null & echo $! > $pid sleep 1; head $out sleep 2; if ! ps -p $! > /dev/null ; then exit 1 fi ;; (stop) if [ -f $pid ]; then TARGET_PID=`cat $pid` if kill -0 $TARGET_PID > /dev/null 2>&1; then echo stopping $command kill $TARGET_PID count=0 location=$BOOKIE_LOG_DIR while ps -p $TARGET_PID > /dev/null; do echo "Shutdown is in progress... Please wait..." sleep 1 count=`expr $count + 1` if [ "$count" = "$BOOKIE_STOP_TIMEOUT" ]; then break fi done if [ "$count" != "$BOOKIE_STOP_TIMEOUT" ]; then echo "Shutdown completed." fi if kill -0 $TARGET_PID > /dev/null 2>&1; then fileName=$location/$command.out $JAVA_HOME/bin/jstack $TARGET_PID > $fileName echo Thread dumps are taken for analysis at $fileName if [ "$1" == "-force" ] then echo forcefully stopping $command kill -9 $TARGET_PID >/dev/null 2>&1 echo Successfully stopped the process else echo "WARNING: Bookie Server is not stopped completely." 
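# The process ignored SIGTERM and -force was not supplied: exit without # removing the pid file, so a later "stop <command> -force" can still find # and kill the process. 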
exit 1 fi fi else echo no $command to stop fi rm $pid else echo no $command to stop fi ;; (*) usage echo $supportedargs exit 1 ;; esac bookkeeper-release-4.2.4/bookkeeper-server/conf/000077500000000000000000000000001244507361200216505ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/conf/bk_server.conf000066400000000000000000000231511244507361200245030ustar00rootroot00000000000000#!/bin/sh # #/** # * Copyright 2007 The Apache Software Foundation # * # * Licensed to the Apache Software Foundation (ASF) under one # * or more contributor license agreements. See the NOTICE file # * distributed with this work for additional information # * regarding copyright ownership. The ASF licenses this file # * to you under the Apache License, Version 2.0 (the # * "License"); you may not use this file except in compliance # * with the License. You may obtain a copy of the License at # * # * http://www.apache.org/licenses/LICENSE-2.0 # * # * Unless required by applicable law or agreed to in writing, software # * distributed under the License is distributed on an "AS IS" BASIS, # * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # * See the License for the specific language governing permissions and # * limitations under the License. # */ ## Bookie settings # Port that the bookie server listens on bookiePort=3181 # Set the network interface that the bookie should listen on. # If not set, the bookie will listen on all interfaces. #listeningInterface=eth0 # Whether the bookie is allowed to use a loopback interface as its primary # interface (i.e. the interface it uses to establish its identity). # By default, loopback interfaces are not allowed as the primary # interface. # Using a loopback interface as the primary interface usually indicates # a configuration error. For example, it's fairly common in some VPS setups # to not configure a hostname, or to have the hostname resolve to # 127.0.0.1. If this is the case, then all bookies in the cluster will # establish their identities as 127.0.0.1:3181, and only one will be able # to join the cluster. For VPSs configured like this, you should explicitly # set the listening interface. #allowLoopback=false # Directory Bookkeeper outputs its write ahead log journalDirectory=/tmp/bk-txn # Directory Bookkeeper outputs ledger snapshots # Multiple directories can be defined to store snapshots, separated by ',' # For example: # ledgerDirectories=/tmp/bk1-data,/tmp/bk2-data # # Ideally the ledger dirs and journal dir are each on a different device, # which reduces contention between random i/o and sequential writes. # It is possible to run with a single disk, but performance will be significantly lower. ledgerDirectories=/tmp/bk-data # Ledger Manager Class # What kind of ledger manager is used to manage how ledgers are stored, managed # and garbage collected. See 'BookKeeper Internals' for detailed info. # ledgerManagerType=flat # Root zookeeper path to store ledger metadata # This parameter is used by the zookeeper-based ledger manager as a root znode to # store all ledgers. # zkLedgersRootPath=/ledgers # Max file size of entry logger, in bytes # A new entry log file will be created when the old one reaches the file size limitation # logSizeLimit=2147483648 # Threshold of minor compaction # Entry log files whose remaining size percentage falls below this # threshold will be compacted in a minor compaction. # If it is set to less than zero, the minor compaction is disabled. 
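# Example (illustrative, not the shipped defaults): a cluster short on disk # could compact more aggressively than the commented defaults below by # raising the threshold and shortening the interval, e.g. # minorCompactionThreshold=0.3 # minorCompactionInterval=1800 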
# Threshold of minor compaction
# Entry log files whose remaining size percentage falls below this threshold
# will be compacted in a minor compaction.
# If it is set to less than zero, minor compaction is disabled.
# minorCompactionThreshold=0.2

# Interval to run minor compaction, in seconds
# If it is set to less than zero, minor compaction is disabled.
# minorCompactionInterval=3600

# Threshold of major compaction
# Entry log files whose remaining size percentage falls below this threshold
# will be compacted in a major compaction.
# Entry log files whose remaining size percentage is still higher than the
# threshold will never be compacted.
# If it is set to less than zero, major compaction is disabled.
# majorCompactionThreshold=0.8

# Interval to run major compaction, in seconds
# If it is set to less than zero, major compaction is disabled.
# majorCompactionInterval=86400

# Set the maximum number of entries which can be compacted without flushing.
# When compacting, the entries are written to the entrylog and the new offsets
# are cached in memory. Once the entrylog is flushed the index is updated with
# the new offsets. This parameter controls the number of entries added to the
# entrylog before a flush is forced. A higher value for this parameter means
# more memory will be used for offsets. Each offset consists of 3 longs.
# This parameter should _not_ be modified unless you know what you're doing.
# The default is 100,000.
#compactionMaxOutstandingRequests=100000

# Set the rate at which compaction will re-add entries. The unit is adds per second.
#compactionRate=1000

# Max file size of a journal file, in megabytes
# A new journal file will be created when the old one reaches this size limit.
#
# journalMaxSizeMB=2048

# Max number of old journal files to keep
# Keeping a number of old journal files can help data recovery in special cases.
#
# journalMaxBackups=5

# Interval to trigger the next garbage collection, in milliseconds
# Since garbage collection runs in the background, too-frequent gc
# will hurt performance. It is better to use a longer gc interval
# if there is enough disk capacity.
# gcWaitTime=1000

# Interval to flush ledger index pages to disk, in milliseconds
# Flushing index files introduces a lot of random disk I/O.
# If the journal dir and ledger dirs are on separate devices, flushing will
# not affect performance. But if the journal dir and ledger dirs are on the
# same device, performance degrades significantly with too-frequent flushing.
# You can consider increasing the flush interval to get better performance,
# but you will pay with a longer bookie server restart time after a failure.
#
# flushInterval=100

# Interval to watch whether the bookie is dead or not, in milliseconds
#
# bookieDeathWatchInterval=1000

## zookeeper client settings

# A list of one or more servers on which zookeeper is running.
# The server list can be comma separated values, for example:
# zkServers=zk1:2181,zk2:2181,zk3:2181
zkServers=localhost:2181

# ZooKeeper client session timeout in milliseconds
# The bookie server will exit if it receives SESSION_EXPIRED because it
# was partitioned off from ZooKeeper for more than the session timeout.
# JVM garbage collection pauses and disk I/O can cause SESSION_EXPIRED.
# Increasing this value can help avoid this issue.
zkTimeout=10000
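The compaction and flush knobs above can also be set programmatically. A small sketch mirroring the commented defaults follows; the setter names are assumed to match the config keys above (as they do in this release's tests), so treat exact signatures as assumptions.

import org.apache.bookkeeper.conf.ServerConfiguration;

public class CompactionTuningExample {
    public static void main(String[] args) {
        ServerConfiguration conf = new ServerConfiguration();
        // Compact entry logs that are less than 20% live every hour, and
        // those less than 80% live once a day (the commented defaults above).
        conf.setMinorCompactionThreshold(0.2);
        conf.setMinorCompactionInterval(3600);
        conf.setMajorCompactionThreshold(0.8);
        conf.setMajorCompactionInterval(86400);
        // Flush ledger index pages every 100ms; see the trade-off between
        // flush frequency and post-failure restart time described above.
        conf.setFlushInterval(100);
        conf.setGcWaitTime(1000);
    }
}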
## NIO Server settings

# This setting is used to enable/disable Nagle's algorithm, which is a means of
# improving the efficiency of TCP/IP networks by reducing the number of packets
# that need to be sent over the network.
# If you are sending many small messages, such that more than one can fit in
# a single IP packet, setting serverTcpNoDelay to false to enable Nagle's
# algorithm can provide better performance.
# Default value is true.
#
# serverTcpNoDelay=true

## ledger cache settings

# Max number of ledger index files that can be opened in the bookie server.
# If the number of ledger index files reaches this limit, the bookie server
# starts to swap some ledgers from memory to disk. Too-frequent swapping
# will affect performance. You can tune this number according to your
# requirements to gain performance.
# openFileLimit=900

# Size of an index page in the ledger cache, in bytes
# A larger index page can improve performance when writing pages to disk,
# which is efficient when you have a small number of ledgers and these
# ledgers have a similar number of entries.
# If you have a large number of ledgers and each ledger has fewer entries,
# a smaller index page will improve memory usage.
# pageSize=8192

# Number of index pages provided in the ledger cache
# If the number of index pages reaches this limit, the bookie server
# starts to swap some ledgers from memory to disk. You can increase
# this value if you find that swapping becomes more frequent. But make sure
# pageLimit*pageSize is not more than the JVM max memory limit, otherwise
# you will get an OutOfMemoryException.
# In general, increasing pageLimit and using a smaller index page gives
# better performance when there is a large number of ledgers with fewer
# entries each.
# If pageLimit is -1, the bookie server will use 1/3 of the JVM memory to
# compute the limit on the number of index pages.
# pageLimit=-1

# If all configured ledger directories are full, then support only read
# requests from clients.
# If "readOnlyModeEnabled=true", then when all ledger disks are full the
# bookie will be converted to read-only mode and serve only read requests.
# Otherwise the bookie will be shut down.
# By default this is disabled.
#readOnlyModeEnabled=false

# For each ledger dir, the maximum disk space which can be used.
# Default is 0.95f, i.e. at most 95% of the disk can be used, after which
# nothing will be written to that partition. If all ledger dir partitions
# are full, then the bookie will turn to read-only mode if
# 'readOnlyModeEnabled=true' is set, else it will shut down.
# Valid values are between 0 and 1 (exclusive).
#diskUsageThreshold=0.95

# Disk check interval in milliseconds, the interval to check the ledger dirs usage.
# Default is 10000.
#diskCheckInterval=10000

# Interval at which the auditor will do a check of all ledgers in the cluster.
# By default this runs once a week. The interval is set in seconds.
# To disable the periodic check completely, set this to 0.
# Note that periodic checking will put extra load on the cluster, so it should
# not be run more frequently than once a day.
#auditorPeriodicCheckInterval=604800

# The interval between auditor bookie checks.
# The auditor bookie check checks ledger metadata to see which bookies should
# contain entries for each ledger. If a bookie that should contain entries is
# unavailable, then the ledger containing that entry is marked for recovery.
# Setting this to 0 disables the periodic check. Bookie checks will still
# run when a bookie fails.
# The interval is specified in seconds.
#auditorPeriodicBookieCheckInterval=86400
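The pageLimit*pageSize bound above is easy to quantify. A tiny standalone sketch of the arithmetic (plain Java, no BookKeeper APIs; the example numbers are illustrative):

public class LedgerCacheSizing {
    public static void main(String[] args) {
        int pageSize = 8192;      // bytes per index page (pageSize above)
        int pageLimit = 262144;   // number of index pages (pageLimit above)
        long cacheBytes = (long) pageSize * pageLimit;
        // 8192 * 262144 = 2 GiB of heap for index pages alone, so this
        // pageLimit only makes sense with a heap well above 6 GiB, given
        // the conf's guidance of staying under roughly 1/3 of JVM memory.
        System.out.printf("ledger cache upper bound: %d MiB%n",
                cacheBytes / (1024 * 1024));
    }
}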
bookkeeper-release-4.2.4/bookkeeper-server/conf/bkenv.sh000066400000000000000000000027141244507361200233150ustar00rootroot00000000000000
#!/bin/sh
#
#/**
# * Copyright 2007 The Apache Software Foundation
# *
# * Licensed to the Apache Software Foundation (ASF) under one
# * or more contributor license agreements.  See the NOTICE file
# * distributed with this work for additional information
# * regarding copyright ownership.  The ASF licenses this file
# * to you under the Apache License, Version 2.0 (the
# * "License"); you may not use this file except in compliance
# * with the License.  You may obtain a copy of the License at
# *
# *     http://www.apache.org/licenses/LICENSE-2.0
# *
# * Unless required by applicable law or agreed to in writing, software
# * distributed under the License is distributed on an "AS IS" BASIS,
# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# * See the License for the specific language governing permissions and
# * limitations under the License.
# */

# Set JAVA_HOME here to override the environment setting
# JAVA_HOME=

# default settings for starting bookkeeper

# Configuration file of settings used in bookie server
# BOOKIE_CONF=

# Log4j configuration file
# BOOKIE_LOG_CONF=

# Logs location
# BOOKIE_LOG_DIR=

# Extra options to be passed to the jvm
# BOOKIE_EXTRA_OPTS=

# Add extra paths to the bookkeeper classpath
# BOOKIE_EXTRA_CLASSPATH=

# Folder where the Bookie server PID file should be stored
#BOOKIE_PID_DIR=

# Wait time before forcefully killing the Bookie server instance, if the stop is not successful
#BOOKIE_STOP_TIMEOUT=
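The ${bookkeeper.root.logger}-style placeholders in the log4j files below are plain log4j 1.2 substitutions resolved from system properties, which the scripts above inject as -D flags (from BOOKIE_ROOT_LOGGER, BOOKIE_LOG_DIR and BOOKIE_LOG_FILE). A minimal sketch of the same resolution done by hand; the property values and file path are illustrative:

import org.apache.log4j.Logger;
import org.apache.log4j.PropertyConfigurator;

public class BookieLoggingExample {
    public static void main(String[] args) {
        // Equivalent of -Dbookkeeper.root.logger=... -Dbookkeeper.log.dir=...
        System.setProperty("bookkeeper.root.logger", "INFO,ROLLINGFILE");
        System.setProperty("bookkeeper.log.dir", "/var/log/bookkeeper");
        System.setProperty("bookkeeper.log.file", "bookkeeper-bookie.log");
        // log4j 1.2 expands ${...} from system properties while parsing.
        PropertyConfigurator.configure("conf/log4j.properties");
        Logger.getLogger(BookieLoggingExample.class).info("logging configured");
    }
}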
bookkeeper-release-4.2.4/bookkeeper-server/conf/log4j.properties000066400000000000000000000054701244507361200250130ustar00rootroot00000000000000
#
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
#

#
# BookKeeper Logging Configuration
#
# Format is "<default threshold> (, <appender>)+"

# DEFAULT: console appender only
# Define some default values that can be overridden by system properties
bookkeeper.root.logger=WARN,CONSOLE
bookkeeper.log.dir=.
bookkeeper.log.file=bookkeeper-server.log

log4j.rootLogger=${bookkeeper.root.logger}

# Example with rolling log file
#log4j.rootLogger=DEBUG, CONSOLE, ROLLINGFILE

# Example with rolling log file and tracing
#log4j.rootLogger=TRACE, CONSOLE, ROLLINGFILE, TRACEFILE

#
# Log INFO level and above messages to the console
#
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.Threshold=INFO
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} - %-5p - [%t:%C{1}@%L] - %m%n

#
# Add ROLLINGFILE to rootLogger to get log file output
#    Log DEBUG level and above messages to a log file
log4j.appender.ROLLINGFILE=org.apache.log4j.DailyRollingFileAppender
log4j.appender.ROLLINGFILE.Threshold=INFO
log4j.appender.ROLLINGFILE.File=${bookkeeper.log.dir}/${bookkeeper.log.file}

# Max log file size of 10MB
#log4j.appender.ROLLINGFILE.MaxFileSize=10MB
# uncomment the next line to limit number of backup files
#log4j.appender.ROLLINGFILE.MaxBackupIndex=10

log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n

#
# Add TRACEFILE to rootLogger to get log file output
#    Log DEBUG level and above messages to a log file
log4j.appender.TRACEFILE=org.apache.log4j.FileAppender
log4j.appender.TRACEFILE.Threshold=TRACE
log4j.appender.TRACEFILE.File=bookkeeper-trace.log
log4j.appender.TRACEFILE.layout=org.apache.log4j.PatternLayout
### Notice we are including log4j's NDC here (%x)
log4j.appender.TRACEFILE.layout.ConversionPattern=%d{ISO8601} - %-5p [%t:%C{1}@%L][%x] - %m%n
bookkeeper-release-4.2.4/bookkeeper-server/conf/log4j.shell.properties000066400000000000000000000026331244507361200261170ustar00rootroot00000000000000
#
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# # # # BookieShell configuration # DEFAULT: console appender only # Define some default values that can be overridden by system properties bookkeeper.root.logger=ERROR,CONSOLE log4j.rootLogger=${bookkeeper.root.logger} # # Log INFO level and above messages to the console # log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender log4j.appender.CONSOLE.Threshold=INFO log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout log4j.appender.CONSOLE.layout.ConversionPattern=%d{ABSOLUTE} %-5p %m%n log4j.logger.org.apache.zookeeper=ERROR log4j.logger.org.apache.bookkeeper=ERROR log4j.logger.org.apache.bookkeeper.bookie.BookieShell=INFO bookkeeper-release-4.2.4/bookkeeper-server/pom.xml000066400000000000000000000240201244507361200222360ustar00rootroot00000000000000 4.0.0 bookkeeper org.apache.bookkeeper 4.2.4 org.apache.bookkeeper bookkeeper-server bookkeeper-server http://maven.apache.org UTF-8 ${basedir}/lib com.google.protobuf protobuf-java ${protobuf.version} compile com.google.guava guava ${guava.version} junit junit 4.8.1 test org.slf4j slf4j-api 1.6.4 org.slf4j slf4j-log4j12 1.6.4 org.apache.zookeeper zookeeper 3.4.3 compile org.apache.zookeeper zookeeper 3.4.3 test-jar test org.jboss.netty netty 3.2.4.Final compile commons-configuration commons-configuration 1.6 commons-cli commons-cli 1.2 commons-codec commons-codec 1.6 commons-io commons-io 2.1 log4j log4j 1.2.15 javax.mail mail javax.jms jms com.sun.jdmk jmxtools com.sun.jmx jmxri org.apache.bookkeeper bookkeeper-server-compat400 4.0.0 test org.apache.bookkeeper bookkeeper-server org.apache.bookkeeper bookkeeper-server-compat410 4.1.0 test org.apache.bookkeeper bookkeeper-server org.apache.maven.plugins maven-shade-plugin 2.1 package shade true com.google.protobuf:protobuf-java com.google.guava:guava true com.google bk-shade.com.google org.codehaus.mojo license-maven-plugin 1.6 false ${project.basedir} update-pom-license update-file-header package apache_v2 dependency-reduced-pom.xml org.apache.maven.plugins maven-jar-plugin 2.2 test-jar maven-assembly-plugin 2.2.1 ../src/assemble/bin.xml org.apache.rat apache-rat-plugin 0.7 **/DataFormats.java org.codehaus.mojo findbugs-maven-plugin ${basedir}/src/main/resources/findbugsExclude.xml maven-dependency-plugin package copy-dependencies ${project.libdir} runtime maven-clean-plugin 2.5 ${project.libdir} false ${project.basedir} dependency-reduced-pom.xml protobuf maven-antrun-plugin generate-sources default-cli run 
bookkeeper-release-4.2.4/bookkeeper-server/src/000077500000000000000000000000001244507361200215125ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/000077500000000000000000000000001244507361200224365ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/000077500000000000000000000000001244507361200233575ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/000077500000000000000000000000001244507361200241465ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/000077500000000000000000000000001244507361200253675ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/000077500000000000000000000000001244507361200275155ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/000077500000000000000000000000001244507361200307655ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/Bookie.java000066400000000000000000001344641244507361200330540ustar00rootroot00000000000000/** * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
* */ package org.apache.bookkeeper.bookie; import java.io.File; import java.io.FileNotFoundException; import java.io.IOException; import java.io.FilenameFilter; import java.net.InetSocketAddress; import java.net.UnknownHostException; import java.nio.ByteBuffer; import java.util.ArrayList; import java.util.Collections; import java.util.Map; import java.util.HashMap; import java.util.List; import java.util.concurrent.CountDownLatch; import java.util.concurrent.Future; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicBoolean; import org.apache.bookkeeper.meta.LedgerManager; import org.apache.bookkeeper.meta.LedgerManagerFactory; import org.apache.bookkeeper.bookie.BookieException; import org.apache.bookkeeper.bookie.Journal.JournalScanner; import org.apache.bookkeeper.bookie.LedgerDirsManager.LedgerDirsListener; import org.apache.bookkeeper.bookie.LedgerDirsManager.NoWritableLedgerDirException; import org.apache.bookkeeper.conf.ServerConfiguration; import org.apache.bookkeeper.jmx.BKMBeanInfo; import org.apache.bookkeeper.jmx.BKMBeanRegistry; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.WriteCallback; import org.apache.bookkeeper.util.BookKeeperConstants; import org.apache.bookkeeper.util.IOUtils; import org.apache.bookkeeper.util.MathUtils; import org.apache.bookkeeper.util.ZkUtils; import org.apache.bookkeeper.util.StringUtils; import org.apache.bookkeeper.net.DNS; import org.apache.bookkeeper.zookeeper.ZooKeeperWatcherBase; import org.apache.commons.io.FileUtils; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.CreateMode; import org.apache.zookeeper.KeeperException.NodeExistsException; import org.apache.zookeeper.WatchedEvent; import org.apache.zookeeper.Watcher; import org.apache.zookeeper.ZooKeeper; import org.apache.zookeeper.ZooDefs.Ids; import org.apache.zookeeper.Watcher.Event.EventType; import com.google.common.annotations.VisibleForTesting; /** * Implements a bookie. 
* */ public class Bookie extends Thread { static Logger LOG = LoggerFactory.getLogger(Bookie.class); final File journalDirectory; final ServerConfiguration conf; final SyncThread syncThread; final LedgerManagerFactory ledgerManagerFactory; final LedgerManager ledgerManager; final LedgerStorage ledgerStorage; final Journal journal; final HandleFactory handles; static final long METAENTRY_ID_LEDGER_KEY = -0x1000; static final long METAENTRY_ID_FENCE_KEY = -0x2000; // ZK registration path for this bookie private final String bookieRegistrationPath; private LedgerDirsManager ledgerDirsManager; // ZooKeeper client instance for the Bookie ZooKeeper zk; // Running flag private volatile boolean running = false; // Flag identify whether it is in shutting down progress private volatile boolean shuttingdown = false; private int exitCode = ExitCode.OK; // jmx related beans BookieBean jmxBookieBean; BKMBeanInfo jmxLedgerStorageBean; Map masterKeyCache = Collections.synchronizedMap(new HashMap()); final private String zkBookieRegPath; final private AtomicBoolean readOnly = new AtomicBoolean(false); public static class NoLedgerException extends IOException { private static final long serialVersionUID = 1L; private long ledgerId; public NoLedgerException(long ledgerId) { super("Ledger " + ledgerId + " not found"); this.ledgerId = ledgerId; } public long getLedgerId() { return ledgerId; } } public static class NoEntryException extends IOException { private static final long serialVersionUID = 1L; private long ledgerId; private long entryId; public NoEntryException(long ledgerId, long entryId) { this("Entry " + entryId + " not found in " + ledgerId, ledgerId, entryId); } public NoEntryException(String msg, long ledgerId, long entryId) { super(msg); this.ledgerId = ledgerId; this.entryId = entryId; } public long getLedger() { return ledgerId; } public long getEntry() { return entryId; } } // Write Callback do nothing static class NopWriteCallback implements WriteCallback { @Override public void writeComplete(int rc, long ledgerId, long entryId, InetSocketAddress addr, Object ctx) { if (LOG.isDebugEnabled()) { LOG.debug("Finished writing entry {} @ ledger {} for {} : {}", new Object[] { entryId, ledgerId, addr, rc }); } } } final static Future SUCCESS_FUTURE = new Future() { @Override public boolean cancel(boolean mayInterruptIfRunning) { return false; } @Override public Boolean get() { return true; } @Override public Boolean get(long timeout, TimeUnit unit) { return true; } @Override public boolean isCancelled() { return false; } @Override public boolean isDone() { return true; } }; static class CountDownLatchFuture implements Future { T value = null; volatile boolean done = false; CountDownLatch latch = new CountDownLatch(1); @Override public boolean cancel(boolean mayInterruptIfRunning) { return false; } @Override public T get() throws InterruptedException { latch.await(); return value; } @Override public T get(long timeout, TimeUnit unit) throws InterruptedException { latch.await(timeout, unit); return value; } @Override public boolean isCancelled() { return false; } @Override public boolean isDone() { return done; } void setDone(T value) { this.value = value; done = true; latch.countDown(); } } static class FutureWriteCallback implements WriteCallback { CountDownLatchFuture result = new CountDownLatchFuture(); @Override public void writeComplete(int rc, long ledgerId, long entryId, InetSocketAddress addr, Object ctx) { if (LOG.isDebugEnabled()) { LOG.debug("Finished writing entry {} @ ledger {} for {} : 
{}", new Object[] { entryId, ledgerId, addr, rc }); } result.setDone(0 == rc); } public Future getResult() { return result; } } /** * SyncThread is a background thread which flushes ledger index pages periodically. * Also it takes responsibility of garbage collecting journal files. * *

* Before flushing, SyncThread first records a log marker {journalId, journalPos} in memory, * which indicates that entries before this log marker will be persisted to ledger files. * The sync thread then flushes ledger index pages to ledger index files and flushes the * entry logger to ensure all entries are persisted to entry log files for future reads. *

*

* After all data has been persisted to ledger index files and entry loggers, it is safe * to persist the log marker to disk. If the bookie fails after persisting the log mark, * it is able to replay journal entries starting from the last log mark without losing * any entries. *

*

* Journal files whose id is less than the log id in the last log mark can be removed * safely after the last log mark has been persisted. We provide a setting that lets the * user keep a number of old journal files, which may be used for manual recovery after * a critical disaster. *

*/ class SyncThread extends Thread { volatile boolean running = true; // flag to ensure sync thread will not be interrupted during flush final AtomicBoolean flushing = new AtomicBoolean(false); // make flush interval as a parameter final int flushInterval; public SyncThread(ServerConfiguration conf) { super("SyncThread"); flushInterval = conf.getFlushInterval(); LOG.debug("Flush Interval : {}", flushInterval); } private Object suspensionLock = new Object(); private boolean suspended = false; /** * Suspend sync thread. (for testing) */ @VisibleForTesting public void suspendSync() { synchronized(suspensionLock) { suspended = true; } } /** * Resume sync thread. (for testing) */ @VisibleForTesting public void resumeSync() { synchronized(suspensionLock) { suspended = false; suspensionLock.notify(); } } @Override public void run() { try { while (running) { synchronized (this) { try { wait(flushInterval); if (!ledgerStorage.isFlushRequired()) { continue; } } catch (InterruptedException e) { Thread.currentThread().interrupt(); continue; } } synchronized (suspensionLock) { while (suspended) { suspensionLock.wait(); } } // try to mark flushing flag to make sure it would not be interrupted // by shutdown during flushing. otherwise it will receive // ClosedByInterruptException which may cause index file & entry logger // closed and corrupted. if (!flushing.compareAndSet(false, true)) { // set flushing flag failed, means flushing is true now // indicates another thread wants to interrupt sync thread to exit break; } // journal mark log journal.markLog(); boolean flushFailed = false; try { ledgerStorage.flush(); } catch (NoWritableLedgerDirException e) { flushFailed = true; flushing.set(false); transitionToReadOnlyMode(); } catch (IOException e) { LOG.error("Exception flushing Ledger", e); flushFailed = true; } // if flush failed, we should not roll last mark, otherwise we would // have some ledgers are not flushed and their journal entries were lost if (!flushFailed) { try { journal.rollLog(); journal.gcJournals(); } catch (NoWritableLedgerDirException e) { flushing.set(false); transitionToReadOnlyMode(); } } // clear flushing flag flushing.set(false); } } catch (Throwable t) { LOG.error("Exception in SyncThread", t); flushing.set(false); triggerBookieShutdown(ExitCode.BOOKIE_EXCEPTION); } } // shutdown sync thread void shutdown() throws InterruptedException { running = false; if (flushing.compareAndSet(false, true)) { // if setting flushing flag succeed, means syncThread is not flushing now // it is safe to interrupt itself now this.interrupt(); } this.join(); } } public static void checkDirectoryStructure(File dir) throws IOException { if (!dir.exists()) { File parent = dir.getParentFile(); File preV3versionFile = new File(dir.getParent(), BookKeeperConstants.VERSION_FILENAME); final AtomicBoolean oldDataExists = new AtomicBoolean(false); parent.list(new FilenameFilter() { public boolean accept(File dir, String name) { if (name.endsWith(".txn") || name.endsWith(".idx") || name.endsWith(".log")) { oldDataExists.set(true); } return true; } }); if (preV3versionFile.exists() || oldDataExists.get()) { String err = "Directory layout version is less than 3, upgrade needed"; LOG.error(err); throw new IOException(err); } if (!dir.mkdirs()) { String err = "Unable to create directory " + dir; LOG.error(err); throw new IOException(err); } } } /** * Check that the environment for the bookie is correct. 
* This means that the configuration has stayed the same as the * first run and the filesystem structure is up to date. */ private void checkEnvironment(ZooKeeper zk) throws BookieException, IOException { if (zk == null) { // exists only for testing, just make sure directories are correct checkDirectoryStructure(journalDirectory); for (File dir : ledgerDirsManager.getAllLedgerDirs()) { checkDirectoryStructure(dir); } return; } try { String instanceId = getInstanceId(zk); boolean newEnv = false; Cookie masterCookie = Cookie.generateCookie(conf); if (null != instanceId) { masterCookie.setInstanceId(instanceId); } try { Cookie zkCookie = Cookie.readFromZooKeeper(zk, conf); masterCookie.verify(zkCookie); } catch (KeeperException.NoNodeException nne) { newEnv = true; } List missedCookieDirs = new ArrayList(); checkDirectoryStructure(journalDirectory); // try to read cookie from journal directory try { Cookie journalCookie = Cookie.readFromDirectory(journalDirectory); journalCookie.verify(masterCookie); } catch (FileNotFoundException fnf) { missedCookieDirs.add(journalDirectory); } for (File dir : ledgerDirsManager.getAllLedgerDirs()) { checkDirectoryStructure(dir); try { Cookie c = Cookie.readFromDirectory(dir); c.verify(masterCookie); } catch (FileNotFoundException fnf) { missedCookieDirs.add(dir); } } if (!newEnv && missedCookieDirs.size() > 0){ LOG.error("Cookie exists in zookeeper, but not in all local directories. " + " Directories missing cookie file are " + missedCookieDirs); throw new BookieException.InvalidCookieException(); } if (newEnv) { if (missedCookieDirs.size() > 0) { LOG.debug("Directories missing cookie file are {}", missedCookieDirs); masterCookie.writeToDirectory(journalDirectory); for (File dir : ledgerDirsManager.getAllLedgerDirs()) { masterCookie.writeToDirectory(dir); } } masterCookie.writeToZooKeeper(zk, conf); } } catch (KeeperException ke) { LOG.error("Couldn't access cookie in zookeeper", ke); throw new BookieException.InvalidCookieException(ke); } catch (UnknownHostException uhe) { LOG.error("Couldn't check cookies, networking is broken", uhe); throw new BookieException.InvalidCookieException(uhe); } catch (IOException ioe) { LOG.error("Error accessing cookie on disks", ioe); throw new BookieException.InvalidCookieException(ioe); } catch (InterruptedException ie) { LOG.error("Thread interrupted while checking cookies, exiting", ie); throw new BookieException.InvalidCookieException(ie); } } /** * Return the configured address of the bookie. */ public static InetSocketAddress getBookieAddress(ServerConfiguration conf) throws UnknownHostException { String iface = conf.getListeningInterface(); if (iface == null) { iface = "default"; } InetSocketAddress addr = new InetSocketAddress( DNS.getDefaultHost(iface), conf.getBookiePort()); if (addr.getAddress().isLoopbackAddress() && !conf.getAllowLoopback()) { throw new UnknownHostException("Trying to listen on loopback address, " + addr + " but this is forbidden by default " + "(see ServerConfiguration#getAllowLoopback())"); } return addr; } private String getInstanceId(ZooKeeper zk) throws KeeperException, InterruptedException { String instanceId = null; if (zk.exists(conf.getZkLedgersRootPath(), null) == null) { LOG.error("BookKeeper metadata doesn't exist in zookeeper. " + "Has the cluster been initialized? 
" + "Try running bin/bookkeeper shell metaformat"); throw new KeeperException.NoNodeException("BookKeeper metadata"); } try { byte[] data = zk.getData(conf.getZkLedgersRootPath() + "/" + BookKeeperConstants.INSTANCEID, false, null); instanceId = new String(data); } catch (KeeperException.NoNodeException e) { LOG.info("INSTANCEID not exists in zookeeper. Not considering it for data verification"); } return instanceId; } public LedgerDirsManager getLedgerDirsManager() { return ledgerDirsManager; } public static File getCurrentDirectory(File dir) { return new File(dir, BookKeeperConstants.CURRENT_DIR); } public static File[] getCurrentDirectories(File[] dirs) { File[] currentDirs = new File[dirs.length]; for (int i = 0; i < dirs.length; i++) { currentDirs[i] = getCurrentDirectory(dirs[i]); } return currentDirs; } public Bookie(ServerConfiguration conf) throws IOException, KeeperException, InterruptedException, BookieException { super("Bookie-" + conf.getBookiePort()); this.bookieRegistrationPath = conf.getZkAvailableBookiesPath() + "/"; this.conf = conf; this.journalDirectory = getCurrentDirectory(conf.getJournalDir()); this.ledgerDirsManager = new LedgerDirsManager(conf); // instantiate zookeeper client to initialize ledger manager this.zk = instantiateZookeeperClient(conf); checkEnvironment(this.zk); ledgerManagerFactory = LedgerManagerFactory.newLedgerManagerFactory(conf, this.zk); LOG.info("instantiate ledger manager {}", ledgerManagerFactory.getClass().getName()); ledgerManager = ledgerManagerFactory.newLedgerManager(); syncThread = new SyncThread(conf); ledgerStorage = new InterleavedLedgerStorage(conf, ledgerManager, ledgerDirsManager); handles = new HandleFactoryImpl(ledgerStorage); // instantiate the journal journal = new Journal(conf, ledgerDirsManager); // ZK ephemeral node for this Bookie. zkBookieRegPath = this.bookieRegistrationPath + getMyId(); } private String getMyId() throws UnknownHostException { return StringUtils.addrToString(Bookie.getBookieAddress(conf)); } void readJournal() throws IOException, BookieException { journal.replay(new JournalScanner() { @Override public void process(int journalVersion, long offset, ByteBuffer recBuff) throws IOException { long ledgerId = recBuff.getLong(); long entryId = recBuff.getLong(); try { LOG.debug("Replay journal - ledger id : {}, entry id : {}.", ledgerId, entryId); if (entryId == METAENTRY_ID_LEDGER_KEY) { if (journalVersion >= 3) { int masterKeyLen = recBuff.getInt(); byte[] masterKey = new byte[masterKeyLen]; recBuff.get(masterKey); masterKeyCache.put(ledgerId, masterKey); } else { throw new IOException("Invalid journal. Contains journalKey " + " but layout version (" + journalVersion + ") is too old to hold this"); } } else if (entryId == METAENTRY_ID_FENCE_KEY) { if (journalVersion >= 4) { byte[] key = masterKeyCache.get(ledgerId); if (key == null) { key = ledgerStorage.readMasterKey(ledgerId); } LedgerDescriptor handle = handles.getHandle(ledgerId, key); handle.setFenced(); } else { throw new IOException("Invalid journal. 
Contains fenceKey " + " but layout version (" + journalVersion + ") is too old to hold this"); } } else { byte[] key = masterKeyCache.get(ledgerId); if (key == null) { key = ledgerStorage.readMasterKey(ledgerId); } LedgerDescriptor handle = handles.getHandle(ledgerId, key); recBuff.rewind(); handle.addEntry(recBuff); } } catch (NoLedgerException nsle) { LOG.debug("Skip replaying entries of ledger {} since it was deleted.", ledgerId); } catch (BookieException be) { throw new IOException(be); } } }); } synchronized public void start() { setDaemon(true); LOG.debug("I'm starting a bookie with journal directory {}", journalDirectory.getName()); // replay journals try { readJournal(); } catch (IOException ioe) { LOG.error("Exception while replaying journals, shutting down", ioe); shutdown(ExitCode.BOOKIE_EXCEPTION); return; } catch (BookieException be) { LOG.error("Exception while replaying journals, shutting down", be); shutdown(ExitCode.BOOKIE_EXCEPTION); return; } LOG.info("Finished reading journal, starting bookie"); // start bookie thread super.start(); ledgerDirsManager.addLedgerDirsListener(getLedgerDirsListener()); //Start DiskChecker thread ledgerDirsManager.start(); ledgerStorage.start(); syncThread.start(); // set running here. // since bookie server use running as a flag to tell bookie server whether it is alive // if setting it in bookie thread, the watcher might run before bookie thread. running = true; try { registerBookie(conf); } catch (IOException e) { LOG.error("Couldn't register bookie with zookeeper, shutting down", e); shutdown(ExitCode.ZK_REG_FAIL); } } /* * Get the DiskFailure listener for the bookie */ private LedgerDirsListener getLedgerDirsListener() { return new LedgerDirsListener() { @Override public void diskFull(File disk) { // Nothing needs to be handled here. } @Override public void diskFailed(File disk) { // Shutdown the bookie on disk failure. triggerBookieShutdown(ExitCode.BOOKIE_EXCEPTION); } @Override public void allDisksFull() { // Transition to readOnly mode on all disks full transitionToReadOnlyMode(); } @Override public void fatalError() { LOG.error("Fatal error reported by ledgerDirsManager"); triggerBookieShutdown(ExitCode.BOOKIE_EXCEPTION); } }; } /** * Register jmx with parent * * @param parent parent bk mbean info */ public void registerJMX(BKMBeanInfo parent) { try { jmxBookieBean = new BookieBean(this); BKMBeanRegistry.getInstance().register(jmxBookieBean, parent); try { jmxLedgerStorageBean = this.ledgerStorage.getJMXBean(); BKMBeanRegistry.getInstance().register(jmxLedgerStorageBean, jmxBookieBean); } catch (Exception e) { LOG.warn("Failed to register with JMX for ledger cache", e); jmxLedgerStorageBean = null; } } catch (Exception e) { LOG.warn("Failed to register with JMX", e); jmxBookieBean = null; } } /** * Unregister jmx */ public void unregisterJMX() { try { if (jmxLedgerStorageBean != null) { BKMBeanRegistry.getInstance().unregister(jmxLedgerStorageBean); } } catch (Exception e) { LOG.warn("Failed to unregister with JMX", e); } try { if (jmxBookieBean != null) { BKMBeanRegistry.getInstance().unregister(jmxBookieBean); } } catch (Exception e) { LOG.warn("Failed to unregister with JMX", e); } jmxBookieBean = null; jmxLedgerStorageBean = null; } /** * Instantiate the ZooKeeper client for the Bookie. 
*/ private ZooKeeper instantiateZookeeperClient(ServerConfiguration conf) throws IOException, InterruptedException, KeeperException { if (conf.getZkServers() == null) { LOG.warn("No ZK servers passed to Bookie constructor so BookKeeper clients won't know about this server!"); return null; } // Create the ZooKeeper client instance return newZookeeper(conf.getZkServers(), conf.getZkTimeout()); } /** * Register as an available bookie */ protected void registerBookie(ServerConfiguration conf) throws IOException { if (null == zk) { // zookeeper instance is null, means not register itself to zk return; } // ZK ephemeral node for this Bookie. String zkBookieRegPath = this.bookieRegistrationPath + StringUtils.addrToString(getBookieAddress(conf)); final CountDownLatch prevNodeLatch = new CountDownLatch(1); try{ Watcher zkPrevRegNodewatcher = new Watcher() { @Override public void process(WatchedEvent event) { // Check for prev znode deletion. Connection expiration is // not handling, since bookie has logic to shutdown. if (EventType.NodeDeleted == event.getType()) { prevNodeLatch.countDown(); } } }; if (null != zk.exists(zkBookieRegPath, zkPrevRegNodewatcher)) { LOG.info("Previous bookie registration znode: " + zkBookieRegPath + " exists, so waiting zk sessiontimeout: " + conf.getZkTimeout() + "ms for znode deletion"); // waiting for the previous bookie reg znode deletion if (!prevNodeLatch.await(conf.getZkTimeout(), TimeUnit.MILLISECONDS)) { throw new KeeperException.NodeExistsException( zkBookieRegPath); } } // Create the ZK ephemeral node for this Bookie. zk.create(zkBookieRegPath, new byte[0], Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL); } catch (KeeperException ke) { LOG.error("ZK exception registering ephemeral Znode for Bookie!", ke); // Throw an IOException back up. This will cause the Bookie // constructor to error out. Alternatively, we could do a System // exit here as this is a fatal error. throw new IOException(ke); } catch (InterruptedException ie) { LOG.error("ZK exception registering ephemeral Znode for Bookie!", ie); // Throw an IOException back up. This will cause the Bookie // constructor to error out. Alternatively, we could do a System // exit here as this is a fatal error. throw new IOException(ie); } } /* * Transition the bookie to readOnly mode */ @VisibleForTesting public void transitionToReadOnlyMode() { if (shuttingdown == true) { return; } if (!readOnly.compareAndSet(false, true)) { return; } if (!conf.isReadOnlyModeEnabled()) { LOG.warn("ReadOnly mode is not enabled. " + "Can be enabled by configuring " + "'readOnlyModeEnabled=true' in configuration." + "Shutting down bookie"); triggerBookieShutdown(ExitCode.BOOKIE_EXCEPTION); return; } LOG.info("Transitioning Bookie to ReadOnly mode," + " and will serve only read requests from clients!"); try { if (null == zk.exists(this.bookieRegistrationPath + BookKeeperConstants.READONLY, false)) { try { zk.create(this.bookieRegistrationPath + BookKeeperConstants.READONLY, new byte[0], Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT); } catch (NodeExistsException e) { // this node is just now created by someone. } } // Create the readonly node zk.create(this.bookieRegistrationPath + BookKeeperConstants.READONLY + "/" + getMyId(), new byte[0], Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL); // Clear the current registered node zk.delete(zkBookieRegPath, -1); } catch (IOException e) { LOG.error("Error in transition to ReadOnly Mode." 
+ " Shutting down", e); triggerBookieShutdown(ExitCode.BOOKIE_EXCEPTION); return; } catch (KeeperException e) { LOG.error("Error in transition to ReadOnly Mode." + " Shutting down", e); triggerBookieShutdown(ExitCode.BOOKIE_EXCEPTION); return; } catch (InterruptedException e) { Thread.currentThread().interrupt(); LOG.warn("Interrupted Exception while transitioning to ReadOnly Mode."); return; } } /* * Check whether Bookie is writable */ public boolean isReadOnly() { return readOnly.get(); } /** * Create a new zookeeper client to zk cluster. * *

* The bookie server only uses the zk client when syncing ledgers for garbage collection. * So when the zk client session expires, this bookie server is no longer in the available * bookie list. The bookie clients will be notified of its expiration, and no more bookie * requests will be sent to this server. So it is better to exit when the zk session expires. *

*

* Since there are lots of bk operations cached in the queue, we wait for all the * pending operations to be processed before quitting. This is done by calling shutdown. *

* * @param zkServers the quorum list of zk servers * @param sessionTimeout session timeout of zk connection * * @return zk client instance */ private ZooKeeper newZookeeper(final String zkServers, final int sessionTimeout) throws IOException, InterruptedException, KeeperException { ZooKeeperWatcherBase w = new ZooKeeperWatcherBase(conf.getZkTimeout()) { @Override public void process(WatchedEvent event) { // Check for expired connection. if (event.getState().equals(Watcher.Event.KeeperState.Expired)) { LOG.error("ZK client connection to the ZK server has expired!"); shutdown(ExitCode.ZK_EXPIRED); } else { super.process(event); } } }; return ZkUtils.createConnectedZookeeperClient(zkServers, w); } public boolean isRunning() { return running; } @Override public void run() { // bookie thread wait for journal thread try { // start journal journal.start(); // wait until journal quits journal.join(); } catch (InterruptedException ie) { } // if the journal thread quits due to shutting down, it is ok if (!shuttingdown) { // some error found in journal thread and it quits // following add operations to it would hang unit client timeout // so we should let bookie server exists LOG.error("Journal manager quits unexpectedly."); triggerBookieShutdown(ExitCode.BOOKIE_EXCEPTION); } } // Triggering the Bookie shutdown in its own thread, // because shutdown can be called from sync thread which would be // interrupted by shutdown call. AtomicBoolean shutdownTriggered = new AtomicBoolean(false); void triggerBookieShutdown(final int exitCode) { if (!shutdownTriggered.compareAndSet(false, true)) { return; } LOG.info("Triggering shutdown of Bookie-{} with exitCode {}", conf.getBookiePort(), exitCode); new Thread("BookieShutdownTrigger") { public void run() { Bookie.this.shutdown(exitCode); } }.start(); } // provided a public shutdown method for other caller // to shut down bookie gracefully public int shutdown() { return shutdown(ExitCode.OK); } // internal shutdown method to let shutdown bookie gracefully // when encountering exception synchronized int shutdown(int exitCode) { try { if (running) { // avoid shutdown twice // the exitCode only set when first shutdown usually due to exception found LOG.info("Shutting down Bookie-{} with exitCode {}", conf.getBookiePort(), exitCode); this.exitCode = exitCode; // mark bookie as in shutting down progress shuttingdown = true; // Shutdown journal journal.shutdown(); this.join(); syncThread.shutdown(); // Shutdown the EntryLogger which has the GarbageCollector Thread running ledgerStorage.shutdown(); // close Ledger Manager try { ledgerManager.close(); ledgerManagerFactory.uninitialize(); } catch (IOException ie) { LOG.error("Failed to close active ledger manager : ", ie); } //Shutdown disk checker ledgerDirsManager.shutdown(); // Shutdown the ZK client if(zk != null) zk.close(); // setting running to false here, so watch thread // in bookie server know it only after bookie shut down running = false; } } catch (InterruptedException ie) { LOG.error("Interrupted during shutting down bookie : ", ie); } return this.exitCode; } /** * Retrieve the ledger descriptor for the ledger which entry should be added to. * The LedgerDescriptor returned from this method should be eventually freed with * #putHandle(). 
* * @throws BookieException if masterKey does not match the master key of the ledger */ private LedgerDescriptor getLedgerForEntry(ByteBuffer entry, byte[] masterKey) throws IOException, BookieException { long ledgerId = entry.getLong(); LedgerDescriptor l = handles.getHandle(ledgerId, masterKey); if (!masterKeyCache.containsKey(ledgerId)) { // new handle, we should add the key to the journal to ensure we can rebuild ByteBuffer bb = ByteBuffer.allocate(8 + 8 + 4 + masterKey.length); bb.putLong(ledgerId); bb.putLong(METAENTRY_ID_LEDGER_KEY); bb.putInt(masterKey.length); bb.put(masterKey); bb.flip(); journal.logAddEntry(bb, new NopWriteCallback(), null); masterKeyCache.put(ledgerId, masterKey); } return l; } protected void addEntryByLedgerId(long ledgerId, ByteBuffer entry) throws IOException, BookieException { byte[] key = ledgerStorage.readMasterKey(ledgerId); LedgerDescriptor handle = handles.getHandle(ledgerId, key); handle.addEntry(entry); } /** * Add an entry to a ledger as specified by handle. */ private void addEntryInternal(LedgerDescriptor handle, ByteBuffer entry, WriteCallback cb, Object ctx) throws IOException, BookieException { long ledgerId = handle.getLedgerId(); entry.rewind(); long entryId = handle.addEntry(entry); entry.rewind(); LOG.trace("Adding {}@{}", entryId, ledgerId); journal.logAddEntry(entry, cb, ctx); } /** * Add entry to a ledger, even if the ledger has previously been fenced. This should only * happen in bookie recovery or ledger recovery cases, where entries are being replicated * so that they exist on a quorum of bookies. The corresponding client side call for this * is not exposed to users. */ public void recoveryAddEntry(ByteBuffer entry, WriteCallback cb, Object ctx, byte[] masterKey) throws IOException, BookieException { try { LedgerDescriptor handle = getLedgerForEntry(entry, masterKey); synchronized (handle) { addEntryInternal(handle, entry, cb, ctx); } } catch (NoWritableLedgerDirException e) { transitionToReadOnlyMode(); throw new IOException(e); } } /** * Add entry to a ledger. * @throws BookieException.LedgerFencedException if the ledger is fenced */ public void addEntry(ByteBuffer entry, WriteCallback cb, Object ctx, byte[] masterKey) throws IOException, BookieException { try { LedgerDescriptor handle = getLedgerForEntry(entry, masterKey); synchronized (handle) { if (handle.isFenced()) { throw BookieException .create(BookieException.Code.LedgerFencedException); } addEntryInternal(handle, entry, cb, ctx); } } catch (NoWritableLedgerDirException e) { transitionToReadOnlyMode(); throw new IOException(e); } } /** * Fences a ledger. From this point on, clients will be unable to * write to this ledger. Only recoveryAddEntry will be * able to add entries to the ledger. * This method is idempotent. Once a ledger is fenced, it can * never be unfenced. Fencing a fenced ledger has no effect. 
*/ public Future fenceLedger(long ledgerId, byte[] masterKey) throws IOException, BookieException { LedgerDescriptor handle = handles.getHandle(ledgerId, masterKey); boolean success; synchronized (handle) { success = handle.setFenced(); } if (success) { // fenced first time, we should add the key to journal ensure we can rebuild ByteBuffer bb = ByteBuffer.allocate(8 + 8); bb.putLong(ledgerId); bb.putLong(METAENTRY_ID_FENCE_KEY); bb.flip(); FutureWriteCallback fwc = new FutureWriteCallback(); LOG.debug("record fenced state for ledger {} in journal.", ledgerId); journal.logAddEntry(bb, fwc, null); return fwc.getResult(); } else { // already fenced return SUCCESS_FUTURE; } } public ByteBuffer readEntry(long ledgerId, long entryId) throws IOException, NoLedgerException { LedgerDescriptor handle = handles.getReadOnlyHandle(ledgerId); LOG.trace("Reading {}@{}", entryId, ledgerId); return handle.readEntry(entryId); } // The rest of the code is test stuff static class CounterCallback implements WriteCallback { int count; synchronized public void writeComplete(int rc, long l, long e, InetSocketAddress addr, Object ctx) { count--; if (count == 0) { notifyAll(); } } synchronized public void incCount() { count++; } synchronized public void waitZero() throws InterruptedException { while (count > 0) { wait(); } } } /** * Format the bookie server data * * @param conf * ServerConfiguration * @param isInteractive * Whether format should ask prompt for confirmation if old data * exists or not. * @param force * If non interactive and force is true, then old data will be * removed without confirm prompt. * @return Returns true if the format is success else returns false */ public static boolean format(ServerConfiguration conf, boolean isInteractive, boolean force) { File journalDir = conf.getJournalDir(); if (journalDir.exists() && journalDir.isDirectory() && journalDir.list().length != 0) { try { boolean confirm = false; if (!isInteractive) { // If non interactive and force is set, then delete old // data. 
if (force) { confirm = true; } else { confirm = false; } } else { confirm = IOUtils .confirmPrompt("Are you sure to format Bookie data..?"); } if (!confirm) { LOG.error("Bookie format aborted!!"); return false; } } catch (IOException e) { LOG.error("Error during bookie format", e); return false; } } if (!cleanDir(journalDir)) { LOG.error("Formatting journal directory failed"); return false; } File[] ledgerDirs = conf.getLedgerDirs(); for (File dir : ledgerDirs) { if (!cleanDir(dir)) { LOG.error("Formatting ledger directory " + dir + " failed"); return false; } } LOG.info("Bookie format completed successfully"); return true; } private static boolean cleanDir(File dir) { if (dir.exists()) { for (File child : dir.listFiles()) { boolean delete = FileUtils.deleteQuietly(child); if (!delete) { LOG.error("Not able to delete " + child); return false; } } } else if (!dir.mkdirs()) { LOG.error("Not able to create the directory " + dir); return false; } return true; } /** * @param args * @throws IOException * @throws InterruptedException */ public static void main(String[] args) throws IOException, InterruptedException, BookieException, KeeperException { Bookie b = new Bookie(new ServerConfiguration()); b.start(); CounterCallback cb = new CounterCallback(); long start = MathUtils.now(); for (int i = 0; i < 100000; i++) { ByteBuffer buff = ByteBuffer.allocate(1024); buff.putLong(1); buff.putLong(i); buff.limit(1024); buff.position(0); cb.incCount(); b.addEntry(buff, cb, null, new byte[0]); } cb.waitZero(); long end = MathUtils.now(); System.out.println("Took " + (end-start) + "ms"); } /** * Returns exit code - cause of failure * * @return {@link ExitCode} */ public int getExitCode() { return exitCode; } } BookieBean.java000066400000000000000000000025101244507361200335450ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.bookkeeper.bookie; import java.io.File; import org.apache.bookkeeper.jmx.BKMBeanInfo; /** * Bookie Bean */ public class BookieBean implements BookieMXBean, BKMBeanInfo { protected Bookie bk; public BookieBean(Bookie bk) { this.bk = bk; } @Override public String getName() { return "Bookie"; } @Override public boolean isHidden() { return false; } @Override public int getQueueLength() { return bk.journal.getJournalQueueLength(); } } BookieException.java000066400000000000000000000106411244507361200346420ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/bookiepackage org.apache.bookkeeper.bookie; /* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. 
See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ import java.lang.Exception; @SuppressWarnings("serial") public abstract class BookieException extends Exception { private int code; public BookieException(int code) { this.code = code; } public BookieException(int code, Throwable t) { super(t); } public BookieException(int code, String reason) { super(reason); } public static BookieException create(int code) { switch(code) { case Code.UnauthorizedAccessException: return new BookieUnauthorizedAccessException(); case Code.LedgerFencedException: return new LedgerFencedException(); case Code.InvalidCookieException: return new InvalidCookieException(); case Code.UpgradeException: return new UpgradeException(); default: return new BookieIllegalOpException(); } } public interface Code { int OK = 0; int UnauthorizedAccessException = -1; int IllegalOpException = -100; int LedgerFencedException = -101; int InvalidCookieException = -102; int UpgradeException = -103; } public void setCode(int code) { this.code = code; } public int getCode() { return this.code; } public String getMessage(int code) { String err = "Invalid operation"; switch(code) { case Code.OK: err = "No problem"; break; case Code.UnauthorizedAccessException: err = "Error while reading ledger"; break; case Code.LedgerFencedException: err = "Ledger has been fenced; No more entries can be added"; break; case Code.InvalidCookieException: err = "Invalid environment cookie found"; break; case Code.UpgradeException: err = "Error performing an upgrade operation "; break; } String reason = super.getMessage(); if (reason == null) { if (super.getCause() != null) { reason = super.getCause().getMessage(); } } if (reason == null) { return err; } else { return String.format("%s [%s]", err, reason); } } public static class BookieUnauthorizedAccessException extends BookieException { public BookieUnauthorizedAccessException() { super(Code.UnauthorizedAccessException); } } public static class BookieIllegalOpException extends BookieException { public BookieIllegalOpException() { super(Code.UnauthorizedAccessException); } } public static class LedgerFencedException extends BookieException { public LedgerFencedException() { super(Code.LedgerFencedException); } } public static class InvalidCookieException extends BookieException { public InvalidCookieException() { this(""); } public InvalidCookieException(String reason) { super(Code.InvalidCookieException, reason); } public InvalidCookieException(Throwable cause) { super(Code.InvalidCookieException, cause); } } public static class UpgradeException extends BookieException { public UpgradeException() { super(Code.UpgradeException); } public UpgradeException(Throwable cause) { super(Code.UpgradeException, cause); } public UpgradeException(String reason) { super(Code.UpgradeException, reason); } } } 
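The exception codes above are what Bookie's write path raises once a ledger is fenced: addEntry throws Code.LedgerFencedException (-101), while recoveryAddEntry still succeeds. A hedged sketch of that flow from a caller's perspective follows; it is placed in the org.apache.bookkeeper.bookie package so the package-private NopWriteCallback is visible, and, like Bookie.main above, it assumes a local ZooKeeper with formatted BookKeeper metadata and default directories.

package org.apache.bookkeeper.bookie;

import java.nio.ByteBuffer;

import org.apache.bookkeeper.conf.ServerConfiguration;

public class FencingExample {
    public static void main(String[] args) throws Exception {
        Bookie bookie = new Bookie(new ServerConfiguration());
        bookie.start();

        long ledgerId = 1L;
        byte[] masterKey = new byte[0];

        // Fence the ledger; from now on only recoveryAddEntry may write to it.
        bookie.fenceLedger(ledgerId, masterKey).get();

        ByteBuffer entry = ByteBuffer.allocate(1024);
        entry.putLong(ledgerId);   // entries begin with ledgerId ...
        entry.putLong(0L);         // ... then entryId, as in Bookie.main
        entry.rewind();
        try {
            bookie.addEntry(entry, new Bookie.NopWriteCallback(), null, masterKey);
        } catch (BookieException e) {
            // Expect Code.LedgerFencedException (-101) from the fenced check.
            System.out.println("write rejected, code=" + e.getCode());
        }
        bookie.shutdown();
    }
}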
BookieMXBean.java000066400000000000000000000017661244507361200340260ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.bookkeeper.bookie; import java.io.File; /** * Bookie MBean */ public interface BookieMXBean { /** * @return log entry queue length */ public int getQueueLength(); } BookieShell.java000066400000000000000000001265511244507361200337630ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package org.apache.bookkeeper.bookie; import java.io.File; import java.io.FileNotFoundException; import java.io.IOException; import java.net.InetSocketAddress; import java.nio.ByteBuffer; import java.util.Formatter; import java.util.HashMap; import java.util.Map; import org.apache.zookeeper.ZooKeeper; import org.apache.bookkeeper.meta.LedgerManagerFactory; import org.apache.bookkeeper.meta.LedgerUnderreplicationManager; import org.apache.bookkeeper.zookeeper.ZooKeeperWatcherBase; import java.util.List; import java.util.ArrayList; import java.util.Iterator; import java.util.Collections; import java.util.concurrent.TimeUnit; import org.apache.bookkeeper.replication.AuditorElector; import org.apache.bookkeeper.bookie.EntryLogger.EntryLogScanner; import org.apache.bookkeeper.bookie.Journal.JournalScanner; import org.apache.bookkeeper.bookie.Journal.LastLogMark; import org.apache.bookkeeper.client.BKException; import org.apache.bookkeeper.client.BookKeeperAdmin; import org.apache.bookkeeper.client.BookKeeper; import org.apache.bookkeeper.client.LedgerHandle; import org.apache.bookkeeper.client.LedgerMetadata; import org.apache.bookkeeper.conf.ClientConfiguration; import org.apache.bookkeeper.conf.ServerConfiguration; import org.apache.bookkeeper.meta.LedgerManager; import org.apache.bookkeeper.meta.LedgerManager.LedgerRangeIterator; import org.apache.bookkeeper.meta.LedgerManager.LedgerRange; import org.apache.bookkeeper.util.EntryFormatter; import org.apache.bookkeeper.util.Tool; import org.apache.bookkeeper.util.ZkUtils; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.GenericCallback; import com.google.common.util.concurrent.AbstractFuture; import static com.google.common.base.Charsets.UTF_8; import org.apache.commons.configuration.Configuration; import org.apache.commons.configuration.CompositeConfiguration; import org.apache.commons.configuration.PropertiesConfiguration; import org.apache.commons.cli.BasicParser; import org.apache.commons.cli.MissingArgumentException; import org.apache.commons.cli.Options; import org.apache.commons.cli.CommandLine; import org.apache.commons.cli.HelpFormatter; import org.apache.commons.cli.ParseException; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * BookieShell provides utilities for administering a bookkeeper cluster.
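 * <p>
 * Typical invocations look like the following (a sketch; the wrapper script and
 * exact flags are assumed from the usage strings defined below):
 * <pre>
 *   bookkeeper shell listledgers -meta
 *   bookkeeper shell ledgermetadata -ledgerid 42
 *   bookkeeper shell simpletest -ensemble 3 -writeQuorum 2 -ackQuorum 2 -numEntries 100
 * </pre>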
*/ public class BookieShell implements Tool { static final Logger LOG = LoggerFactory.getLogger(BookieShell.class); static final String ENTRY_FORMATTER_CLASS = "entryFormatterClass"; static final String CMD_METAFORMAT = "metaformat"; static final String CMD_BOOKIEFORMAT = "bookieformat"; static final String CMD_RECOVER = "recover"; static final String CMD_LEDGER = "ledger"; static final String CMD_LISTLEDGERS = "listledgers"; static final String CMD_LEDGERMETADATA = "ledgermetadata"; static final String CMD_LISTUNDERREPLICATED = "listunderreplicated"; static final String CMD_WHOISAUDITOR = "whoisauditor"; static final String CMD_SIMPLETEST = "simpletest"; static final String CMD_READLOG = "readlog"; static final String CMD_READJOURNAL = "readjournal"; static final String CMD_LASTMARK = "lastmark"; static final String CMD_AUTORECOVERY = "autorecovery"; static final String CMD_HELP = "help"; final ServerConfiguration bkConf = new ServerConfiguration(); File[] ledgerDirectories; File journalDirectory; EntryLogger entryLogger = null; Journal journal = null; EntryFormatter formatter; int pageSize; int entriesPerPage; interface Command { public int runCmd(String[] args) throws Exception; public void printUsage(); } abstract class MyCommand implements Command { abstract Options getOptions(); abstract String getDescription(); abstract String getUsage(); abstract int runCmd(CommandLine cmdLine) throws Exception; String cmdName; MyCommand(String cmdName) { this.cmdName = cmdName; } @Override public int runCmd(String[] args) throws Exception { try { BasicParser parser = new BasicParser(); CommandLine cmdLine = parser.parse(getOptions(), args); return runCmd(cmdLine); } catch (ParseException e) { LOG.error("Error parsing command line arguments : ", e); printUsage(); return -1; } } @Override public void printUsage() { HelpFormatter hf = new HelpFormatter(); System.err.println(cmdName + ": " + getDescription()); hf.printHelp(getUsage(), getOptions()); } } /** * Format the bookkeeper metadata present in zookeeper */ class MetaFormatCmd extends MyCommand { Options opts = new Options(); MetaFormatCmd() { super(CMD_METAFORMAT); opts.addOption("n", "nonInteractive", false, "Whether to confirm if old data exists..?"); opts.addOption("f", "force", false, "If [nonInteractive] is specified, then whether" + " to force delete the old data without prompt."); } @Override Options getOptions() { return opts; } @Override String getDescription() { return "Format bookkeeper metadata in zookeeper"; } @Override String getUsage() { return "metaformat [-nonInteractive] [-force]"; } @Override int runCmd(CommandLine cmdLine) throws Exception { boolean interactive = (!cmdLine.hasOption("n")); boolean force = cmdLine.hasOption("f"); ClientConfiguration adminConf = new ClientConfiguration(bkConf); boolean result = BookKeeperAdmin.format(adminConf, interactive, force); return (result) ? 
0 : 1; } } /** * Formats the local data present in the current bookie server */ class BookieFormatCmd extends MyCommand { Options opts = new Options(); public BookieFormatCmd() { super(CMD_BOOKIEFORMAT); opts.addOption("n", "nonInteractive", false, "Whether to confirm if old data exists..?"); opts.addOption("f", "force", false, "If [nonInteractive] is specified, then whether" + " to force delete the old data without prompt..?"); } @Override Options getOptions() { return opts; } @Override String getDescription() { return "Format the current server contents"; } @Override String getUsage() { return "bookieformat [-nonInteractive] [-force]"; } @Override int runCmd(CommandLine cmdLine) throws Exception { boolean interactive = (!cmdLine.hasOption("n")); boolean force = cmdLine.hasOption("f"); ServerConfiguration conf = new ServerConfiguration(bkConf); boolean result = Bookie.format(conf, interactive, force); return (result) ? 0 : 1; } } /** * Command to recover ledger data from a failed bookie */ class RecoverCmd extends MyCommand { Options opts = new Options(); public RecoverCmd() { super(CMD_RECOVER); } @Override Options getOptions() { return opts; } @Override String getDescription() { return "Recover the ledger data of a failed bookie"; } @Override String getUsage() { return "recover <bookieSrc> [bookieDest]"; } @Override int runCmd(CommandLine cmdLine) throws Exception { String[] args = cmdLine.getArgs(); if (args.length < 1) { throw new MissingArgumentException("'bookieSrc' argument required"); } ClientConfiguration adminConf = new ClientConfiguration(bkConf); BookKeeperAdmin admin = new BookKeeperAdmin(adminConf); try { return bkRecovery(admin, args); } finally { if (null != admin) { admin.close(); } } } private int bkRecovery(BookKeeperAdmin bkAdmin, String[] args) throws InterruptedException, BKException { final String bookieSrcString[] = args[0].split(":"); if (bookieSrcString.length != 2) { System.err.println("Invalid bookieSrc format (host:port expected): " + args[0]); return -1; } final InetSocketAddress bookieSrc = new InetSocketAddress(bookieSrcString[0], Integer.parseInt(bookieSrcString[1])); InetSocketAddress bookieDest = null; if (args.length >= 2) { final String bookieDestString[] = args[1].split(":"); if (bookieDestString.length < 2) { System.err.println("Invalid bookieDest format (host:port expected): " + args[1]); return -1; } bookieDest = new InetSocketAddress(bookieDestString[0], Integer.parseInt(bookieDestString[1])); } bkAdmin.recoverBookieData(bookieSrc, bookieDest); return 0; } } /** * Ledger command that handles ledger-related operations */ class LedgerCmd extends MyCommand { Options lOpts = new Options(); LedgerCmd() { super(CMD_LEDGER); lOpts.addOption("m", "meta", false, "Print meta information"); } @Override public int runCmd(CommandLine cmdLine) throws Exception { String[] leftArgs = cmdLine.getArgs(); if (leftArgs.length <= 0) { System.err.println("ERROR: missing ledger id"); printUsage(); return -1; } boolean printMeta = false; if (cmdLine.hasOption("m")) { printMeta = true; } long ledgerId; try { ledgerId = Long.parseLong(leftArgs[0]); } catch (NumberFormatException nfe) { System.err.println("ERROR: invalid ledger id " + leftArgs[0]); printUsage(); return -1; } if (printMeta) { // print meta
readLedgerMeta(ledgerId); } // dump ledger info
readLedgerIndexEntries(ledgerId); return 0; } @Override String getDescription() { return "Dump ledger index entries into a readable format."; } @Override String getUsage() { return "ledger [-m] <ledger_id>"; } @Override Options getOptions() { return lOpts; } } /** * Command for listing underreplicated ledgers */ class ListUnderreplicatedCmd extends MyCommand { Options opts = new Options(); public ListUnderreplicatedCmd() { super(CMD_LISTUNDERREPLICATED); } @Override Options getOptions() { return opts; } @Override String getDescription() { return "List ledgers marked as underreplicated"; } @Override String getUsage() { return "listunderreplicated"; } @Override int runCmd(CommandLine cmdLine) throws Exception { ZooKeeper zk = null; try { ZooKeeperWatcherBase w = new ZooKeeperWatcherBase(bkConf.getZkTimeout()); zk = ZkUtils.createConnectedZookeeperClient(bkConf.getZkServers(), w); LedgerManagerFactory mFactory = LedgerManagerFactory.newLedgerManagerFactory(bkConf, zk); LedgerUnderreplicationManager underreplicationManager = mFactory.newLedgerUnderreplicationManager(); Iterator<Long> iter = underreplicationManager.listLedgersToRereplicate(); while (iter.hasNext()) { System.out.println(iter.next()); } } finally { if (zk != null) { zk.close(); } } return 0; } } final static int LIST_BATCH_SIZE = 1000; /** * Command to list all ledgers in the cluster */ class ListLedgersCmd extends MyCommand { Options lOpts = new Options(); ListLedgersCmd() { super(CMD_LISTLEDGERS); lOpts.addOption("m", "meta", false, "Print metadata"); } @Override public int runCmd(CommandLine cmdLine) throws Exception { ZooKeeper zk = null; try { ZooKeeperWatcherBase w = new ZooKeeperWatcherBase(bkConf.getZkTimeout()); zk = ZkUtils.createConnectedZookeeperClient(bkConf.getZkServers(), w); LedgerManagerFactory mFactory = LedgerManagerFactory.newLedgerManagerFactory(bkConf, zk); LedgerManager m = mFactory.newLedgerManager(); LedgerRangeIterator iter = m.getLedgerRanges(); if (cmdLine.hasOption("m")) { List<ReadMetadataCallback> futures = new ArrayList<ReadMetadataCallback>(LIST_BATCH_SIZE); while (iter.hasNext()) { LedgerRange r = iter.next(); for (Long lid : r.getLedgers()) { ReadMetadataCallback cb = new ReadMetadataCallback(lid); m.readLedgerMetadata(lid, cb); futures.add(cb); } if (futures.size() >= LIST_BATCH_SIZE) { while (futures.size() > 0) { ReadMetadataCallback cb = futures.remove(0); printLedgerMetadata(cb); } } } while (futures.size() > 0) { ReadMetadataCallback cb = futures.remove(0); printLedgerMetadata(cb); } } else { while (iter.hasNext()) { LedgerRange r = iter.next(); for (Long lid : r.getLedgers()) { System.out.println(Long.toString(lid)); } } } } finally { if (zk != null) { zk.close(); } } return 0; } @Override String getDescription() { return "List all ledgers on the cluster (this may take a long time)"; } @Override String getUsage() { return "listledgers [-meta]"; } @Override Options getOptions() { return lOpts; } } static void printLedgerMetadata(ReadMetadataCallback cb) throws Exception { LedgerMetadata md = cb.get(); System.out.println("ledgerID: " + cb.getLedgerId()); System.out.println(new String(md.serialize(), UTF_8)); } static class ReadMetadataCallback extends AbstractFuture<LedgerMetadata> implements GenericCallback<LedgerMetadata> { final long ledgerId; ReadMetadataCallback(long ledgerId) { this.ledgerId = ledgerId; } long getLedgerId() { return ledgerId; } public void operationComplete(int rc, LedgerMetadata result) { if (rc != 0) { setException(BKException.create(rc)); } else { set(result); } } }
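    // ReadMetadataCallback bridges the ledger manager's callback API onto a Guava
    // AbstractFuture, so asynchronous metadata reads can be consumed synchronously.
    // A minimal sketch of the pattern (the ledger id here is hypothetical):
    //
    //   ReadMetadataCallback cb = new ReadMetadataCallback(42L);
    //   ledgerManager.readLedgerMetadata(42L, cb);
    //   LedgerMetadata md = cb.get(); // blocks until operationComplete() fires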
/** * Print the metadata for a ledger */ class LedgerMetadataCmd extends MyCommand { Options lOpts = new Options(); LedgerMetadataCmd() { super(CMD_LEDGERMETADATA); lOpts.addOption("l", "ledgerid", true, "Ledger ID"); } @Override public int runCmd(CommandLine cmdLine) throws Exception { final long lid = getOptionLongValue(cmdLine, "ledgerid", -1); if (lid == -1) { System.err.println("Must specify a ledger id"); return -1; } ZooKeeper zk = null; try { ZooKeeperWatcherBase w = new ZooKeeperWatcherBase(bkConf.getZkTimeout()); zk = ZkUtils.createConnectedZookeeperClient(bkConf.getZkServers(), w); LedgerManagerFactory mFactory = LedgerManagerFactory.newLedgerManagerFactory(bkConf, zk); LedgerManager m = mFactory.newLedgerManager(); ReadMetadataCallback cb = new ReadMetadataCallback(lid); m.readLedgerMetadata(lid, cb); printLedgerMetadata(cb); } finally { if (zk != null) { zk.close(); } } return 0; } @Override String getDescription() { return "Print the metadata for a ledger"; } @Override String getUsage() { return "ledgermetadata -ledgerid <ledgerid>"; } @Override Options getOptions() { return lOpts; } } /** * Simple test to create a ledger and write to it */ class SimpleTestCmd extends MyCommand { Options lOpts = new Options(); SimpleTestCmd() { super(CMD_SIMPLETEST); lOpts.addOption("e", "ensemble", true, "Ensemble size (default 3)"); lOpts.addOption("w", "writeQuorum", true, "Write quorum size (default 2)"); lOpts.addOption("a", "ackQuorum", true, "Ack quorum size (default 2)"); lOpts.addOption("n", "numEntries", true, "Entries to write (default 1000)"); } @Override public int runCmd(CommandLine cmdLine) throws Exception { byte[] data = new byte[100]; // test data
int ensemble = getOptionIntValue(cmdLine, "ensemble", 3); int writeQuorum = getOptionIntValue(cmdLine, "writeQuorum", 2); int ackQuorum = getOptionIntValue(cmdLine, "ackQuorum", 2); int numEntries = getOptionIntValue(cmdLine, "numEntries", 1000); ClientConfiguration conf = new ClientConfiguration(); conf.addConfiguration(bkConf); BookKeeper bk = new BookKeeper(conf); LedgerHandle lh = bk.createLedger(ensemble, writeQuorum, ackQuorum, BookKeeper.DigestType.MAC, new byte[0]); System.out.println("Ledger ID: " + lh.getId()); long lastReport = System.nanoTime(); for (int i = 0; i < numEntries; i++) { lh.addEntry(data); if (TimeUnit.SECONDS.convert(System.nanoTime() - lastReport, TimeUnit.NANOSECONDS) > 1) { System.out.println(i + " entries written"); lastReport = System.nanoTime(); } } lh.close(); bk.close(); System.out.println(numEntries + " entries written to ledger " + lh.getId()); return 0; } @Override String getDescription() { return "Simple test to create a ledger and write entries to it"; } @Override String getUsage() { return "simpletest [-ensemble N] [-writeQuorum N] [-ackQuorum N] [-numEntries N]"; } @Override Options getOptions() { return lOpts; } } /** * Command to read entry log files. */ class ReadLogCmd extends MyCommand { Options rlOpts = new Options(); ReadLogCmd() { super(CMD_READLOG); rlOpts.addOption("m", "msg", false, "Print message body"); } @Override public int runCmd(CommandLine cmdLine) throws Exception { String[] leftArgs = cmdLine.getArgs(); if (leftArgs.length <= 0) { System.err.println("ERROR: missing entry log id or entry log file name"); printUsage(); return -1; } boolean printMsg = false; if (cmdLine.hasOption("m")) { printMsg = true; } long logId; try { logId = Long.parseLong(leftArgs[0]); } catch (NumberFormatException nfe) { // not an entry log id
File f = new File(leftArgs[0]); String name = f.getName(); if (!name.endsWith(".log")) { // not a log file
System.err.println("ERROR: invalid entry log file name " + leftArgs[0]); printUsage(); return -1; } String idString = name.split("\\.")[0]; logId = Long.parseLong(idString, 16); } // scan entry log
scanEntryLog(logId, printMsg); return 0; } @Override String getDescription() { return "Scan an entry log file and format the entries into a readable format."; } @Override String getUsage() { return "readlog [-msg] <entry_log_id | entry_log_file_name>"; } @Override Options getOptions() { return rlOpts; } } /** * Command to read journal files */ class ReadJournalCmd extends MyCommand { Options rjOpts = new Options(); ReadJournalCmd() { super(CMD_READJOURNAL); rjOpts.addOption("m", "msg", false, "Print message body"); } @Override public int runCmd(CommandLine cmdLine) throws Exception { String[] leftArgs = cmdLine.getArgs(); if (leftArgs.length <= 0) { System.err.println("ERROR: missing journal id or journal file name"); printUsage(); return -1; } boolean printMsg = false; if (cmdLine.hasOption("m")) { printMsg = true; } long journalId; try { journalId = Long.parseLong(leftArgs[0]); } catch (NumberFormatException nfe) { // not a journal id
File f = new File(leftArgs[0]); String name = f.getName(); if (!name.endsWith(".txn")) { // not a journal file
System.err.println("ERROR: invalid journal file name " + leftArgs[0]); printUsage(); return -1; } String idString = name.split("\\.")[0]; journalId = Long.parseLong(idString, 16); } // scan journal
scanJournal(journalId, printMsg); return 0; } @Override String getDescription() { return "Scan a journal file and format the entries into a readable format."; } @Override String getUsage() { return "readjournal [-msg] <journal_id | journal_file_name>"; } @Override Options getOptions() { return rjOpts; } } /** * Command to print last log mark */ class LastMarkCmd extends MyCommand { LastMarkCmd() { super(CMD_LASTMARK); } @Override public int runCmd(CommandLine c) throws Exception { printLastLogMark(); return 0; } @Override String getDescription() { return "Print last log marker."; } @Override String getUsage() { return "lastmark"; } @Override Options getOptions() { return new Options(); } } /** * Command to print help message */ class HelpCmd extends MyCommand { HelpCmd() { super(CMD_HELP); } @Override public int runCmd(CommandLine cmdLine) throws Exception { String[] args = cmdLine.getArgs(); if (args.length == 0) { printShellUsage(); return 0; } String cmdName = args[0]; Command cmd = commands.get(cmdName); if (null == cmd) { System.err.println("Unknown command " + cmdName); printShellUsage(); return -1; } cmd.printUsage(); return 0; } @Override String getDescription() { return "Describe the usage of this program or its subcommands."; } @Override String getUsage() { return "help [COMMAND]"; } @Override Options getOptions() { return new Options(); } }
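    // New subcommands follow the same pattern: subclass MyCommand, describe the
    // options, and register an instance in the commands map below. A minimal
    // sketch (the "dumpconf" command is hypothetical and is not registered):
    //
    //   class DumpConfCmd extends MyCommand {
    //       DumpConfCmd() { super("dumpconf"); }
    //       @Override Options getOptions() { return new Options(); }
    //       @Override String getDescription() { return "Print the shell's journal directory"; }
    //       @Override String getUsage() { return "dumpconf"; }
    //       @Override int runCmd(CommandLine cmdLine) throws Exception {
    //           System.out.println(bkConf.getJournalDirName());
    //           return 0;
    //       }
    //   }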
/** * Command for administration of autorecovery */ class AutoRecoveryCmd extends MyCommand { Options opts = new Options(); public AutoRecoveryCmd() { super(CMD_AUTORECOVERY); opts.addOption("e", "enable", false, "Enable auto recovery of underreplicated ledgers"); opts.addOption("d", "disable", false, "Disable auto recovery of underreplicated ledgers"); } @Override Options getOptions() { return opts; } @Override String getDescription() { return "Enable or disable autorecovery in the cluster."; } @Override String getUsage() { return "autorecovery [-enable|-disable]"; } @Override int runCmd(CommandLine cmdLine) throws Exception { boolean disable = cmdLine.hasOption("d"); boolean enable = cmdLine.hasOption("e"); if ((!disable && !enable) || (enable && disable)) { LOG.error("One and only one of -enable and -disable must be specified"); printUsage(); return 1; } ZooKeeper zk = null; try { ZooKeeperWatcherBase w = new ZooKeeperWatcherBase(bkConf.getZkTimeout()); zk = ZkUtils.createConnectedZookeeperClient(bkConf.getZkServers(), w); LedgerManagerFactory mFactory = LedgerManagerFactory.newLedgerManagerFactory(bkConf, zk); LedgerUnderreplicationManager underreplicationManager = mFactory.newLedgerUnderreplicationManager(); if (enable) { if (underreplicationManager.isLedgerReplicationEnabled()) { LOG.warn("Autorecovery already enabled. Doing nothing"); } else { LOG.info("Enabling autorecovery"); underreplicationManager.enableLedgerReplication(); } } else { if (!underreplicationManager.isLedgerReplicationEnabled()) { LOG.warn("Autorecovery already disabled. Doing nothing"); } else { LOG.info("Disabling autorecovery"); underreplicationManager.disableLedgerReplication(); } } } finally { if (zk != null) { zk.close(); } } return 0; } } /** * Print which node has the auditor lock */ class WhoIsAuditorCmd extends MyCommand { Options opts = new Options(); public WhoIsAuditorCmd() { super(CMD_WHOISAUDITOR); } @Override Options getOptions() { return opts; } @Override String getDescription() { return "Print the node which holds the auditor lock"; } @Override String getUsage() { return "whoisauditor"; } @Override int runCmd(CommandLine cmdLine) throws Exception { ZooKeeper zk = null; try { ZooKeeperWatcherBase w = new ZooKeeperWatcherBase(bkConf.getZkTimeout()); zk = ZkUtils.createConnectedZookeeperClient(bkConf.getZkServers(), w); InetSocketAddress bookieId = AuditorElector.getCurrentAuditor(bkConf, zk); if (bookieId == null) { LOG.info("No auditor elected"); return -1; } LOG.info("Auditor: {}/{}:{}", new Object[] { bookieId.getAddress().getCanonicalHostName(), bookieId.getAddress().getHostAddress(), bookieId.getPort() }); } finally { if (zk != null) { zk.close(); } } return 0; } } final Map<String, MyCommand> commands = new HashMap<String, MyCommand>(); { commands.put(CMD_METAFORMAT, new MetaFormatCmd()); commands.put(CMD_BOOKIEFORMAT, new BookieFormatCmd()); commands.put(CMD_RECOVER, new RecoverCmd()); commands.put(CMD_LEDGER, new LedgerCmd()); commands.put(CMD_LISTLEDGERS, new ListLedgersCmd()); commands.put(CMD_LISTUNDERREPLICATED, new ListUnderreplicatedCmd()); commands.put(CMD_WHOISAUDITOR, new WhoIsAuditorCmd()); commands.put(CMD_LEDGERMETADATA, new LedgerMetadataCmd()); commands.put(CMD_SIMPLETEST, new SimpleTestCmd()); commands.put(CMD_READLOG, new ReadLogCmd()); commands.put(CMD_READJOURNAL, new ReadJournalCmd()); commands.put(CMD_LASTMARK, new LastMarkCmd()); commands.put(CMD_AUTORECOVERY, new AutoRecoveryCmd()); commands.put(CMD_HELP, new HelpCmd()); }
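    // Dispatch flow: run(args) below resolves args[0] against this map and
    // passes the remaining arguments on to the matching command's runCmd().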
@Override public void setConf(Configuration conf) throws Exception { bkConf.loadConf(conf); journalDirectory = Bookie.getCurrentDirectory(bkConf.getJournalDir()); ledgerDirectories = Bookie.getCurrentDirectories(bkConf.getLedgerDirs()); formatter = EntryFormatter.newEntryFormatter(bkConf, ENTRY_FORMATTER_CLASS); LOG.info("Using entry formatter " + formatter.getClass().getName()); pageSize = bkConf.getPageSize(); entriesPerPage = pageSize / 8; } private void printShellUsage() { System.err.println("Usage: BookieShell [-conf configuration] <command>"); System.err.println(); List<String> commandNames = new ArrayList<String>(); for (MyCommand c : commands.values()) { commandNames.add(" " + c.getUsage()); } Collections.sort(commandNames); for (String s : commandNames) { System.err.println(s); } } @Override public int run(String[] args) throws Exception { if (args.length <= 0) { printShellUsage(); return -1; } String cmdName = args[0]; Command cmd = commands.get(cmdName); if (null == cmd) { System.err.println("ERROR: Unknown command " + cmdName); printShellUsage(); return -1; } // prepare new args
String[] newArgs = new String[args.length - 1]; System.arraycopy(args, 1, newArgs, 0, newArgs.length); return cmd.runCmd(newArgs); } public static void main(String argv[]) throws Exception { BookieShell shell = new BookieShell(); if (argv.length <= 0) { shell.printShellUsage(); System.exit(-1); } CompositeConfiguration conf = new CompositeConfiguration(); // load configuration
if ("-conf".equals(argv[0])) { if (argv.length <= 1) { shell.printShellUsage(); System.exit(-1); } conf.addConfiguration(new PropertiesConfiguration(new File(argv[1]).toURI().toURL())); String[] newArgv = new String[argv.length - 2]; System.arraycopy(argv, 2, newArgv, 0, newArgv.length); argv = newArgv; } shell.setConf(conf); int res = shell.run(argv); System.exit(res); }
///
/// Bookie File Operations
///
/** * Get the ledger file of a specified ledger. * * @param ledgerId * Ledger Id * * @return file object. */ private File getLedgerFile(long ledgerId) { String ledgerName = LedgerCacheImpl.getLedgerName(ledgerId); File lf = null; for (File d : ledgerDirectories) { lf = new File(d, ledgerName); if (lf.exists()) { break; } lf = null; } return lf; } /** * Get FileInfo for a specified ledger. * * @param ledgerId * Ledger Id * @return read only file info instance */ ReadOnlyFileInfo getFileInfo(long ledgerId) throws IOException { File ledgerFile = getLedgerFile(ledgerId); if (null == ledgerFile) { throw new FileNotFoundException("No index file found for ledger " + ledgerId + ". It may not have been flushed yet."); } ReadOnlyFileInfo fi = new ReadOnlyFileInfo(ledgerFile, null); fi.readHeader(); return fi; } private synchronized void initEntryLogger() throws IOException { if (null == entryLogger) { // provide read only entry logger
entryLogger = new ReadOnlyEntryLogger(bkConf); } } /** * Scan over entry log * * @param logId * Entry Log Id * @param scanner * Entry Log Scanner */ protected void scanEntryLog(long logId, EntryLogScanner scanner) throws IOException { initEntryLogger(); entryLogger.scanEntryLog(logId, scanner); } private synchronized Journal getJournal() throws IOException { if (null == journal) { journal = new Journal(bkConf, new LedgerDirsManager(bkConf)); } return journal; } /** * Scan journal file * * @param journalId * Journal File Id * @param scanner * Journal File Scanner */ protected void scanJournal(long journalId, JournalScanner scanner) throws IOException { getJournal().scanJournal(journalId, 0L, scanner); }
///
/// Bookie Shell Commands
///
/** * Read ledger meta * * @param ledgerId * Ledger Id */ protected void readLedgerMeta(long ledgerId) throws Exception { System.out.println("===== LEDGER: " + ledgerId + " ====="); FileInfo fi = getFileInfo(ledgerId); byte[] masterKey = fi.getMasterKey(); if (null == masterKey) { System.out.println("master key : NULL"); } else { System.out.println("master key : " + bytes2Hex(fi.getMasterKey())); } long size = fi.size(); if (size % 8 == 0) { System.out.println("size : " + size); } else { System.out.println("size : " + size + " (not aligned to 8 bytes; the file may be corrupted or still being flushed)"); } System.out.println("entries : " + (size / 8)); } /** * Read ledger index entries * * @param ledgerId * Ledger Id * @throws IOException */ protected void readLedgerIndexEntries(long ledgerId) throws IOException { System.out.println("===== LEDGER: " + ledgerId + " ====="); FileInfo fi = getFileInfo(ledgerId); long size = fi.size(); System.out.println("size : " + size); long curSize = 0; long curEntry = 0; LedgerEntryPage lep = new LedgerEntryPage(pageSize, entriesPerPage); lep.usePage(); try { while (curSize < size) { lep.setLedger(ledgerId); lep.setFirstEntry(curEntry); lep.readPage(fi);
            // process a page
            for (int i = 0; i < entriesPerPage; i++) {
                long offset = lep.getOffset(i * 8);
                if (offset == 0) {
                    System.out.println("entry " + curEntry + "\t:\tN/A");
                } else {
                    long entryLogId = offset >> 32L;
                    long pos = offset & 0xffffffffL;
                    System.out.println("entry " + curEntry + "\t:\t(log:" + entryLogId + ", pos: " + pos + ")");
                }
                ++curEntry;
            }
            curSize += pageSize;
        }
    } catch (IOException ie) { LOG.error("Failed to read index page : ", ie); if (curSize + pageSize < size) { System.out.println("Failed to read index page @ " + curSize + ", the index file may be corrupted : " + ie.getMessage()); } else { System.out.println("Failed to read last index page @ " + curSize + ", the index file may be corrupted or last index page is not fully flushed yet : " + ie.getMessage()); } } } /** * Scan over an entry log file. * * @param logId * Entry Log File id. * @param printMsg * Whether printing the entry data. */ protected void scanEntryLog(long logId, final boolean printMsg) throws Exception { System.out.println("Scan entry log " + logId + " (" + Long.toHexString(logId) + ".log)"); scanEntryLog(logId, new EntryLogScanner() { @Override public boolean accept(long ledgerId) { return true; } @Override public void process(long ledgerId, long startPos, ByteBuffer entry) { formatEntry(startPos, entry, printMsg); } }); } /** * Scan a journal file * * @param journalId * Journal File Id * @param printMsg * Whether printing the entry data.
*/ protected void scanJournal(long journalId, final boolean printMsg) throws Exception { System.out.println("Scan journal " + journalId + " (" + Long.toHexString(journalId) + ".txn)"); scanJournal(journalId, new JournalScanner() { boolean printJournalVersion = false; @Override public void process(int journalVersion, long offset, ByteBuffer entry) throws IOException { if (!printJournalVersion) { System.out.println("Journal Version : " + journalVersion); printJournalVersion = true; } formatEntry(offset, entry, printMsg); } }); } /** * Print last log mark */ protected void printLastLogMark() throws IOException { LastLogMark lastLogMark = getJournal().getLastLogMark(); System.out.println("LastLogMark: Journal Id - " + lastLogMark.getTxnLogId() + "(" + Long.toHexString(lastLogMark.getTxnLogId()) + ".txn), Pos - " + lastLogMark.getTxnLogPosition()); } /** * Format the message into a readable format. * * @param pos * File offset of the message stored in entry log file * @param recBuff * Entry Data * @param printMsg * Whether printing the message body */ private void formatEntry(long pos, ByteBuffer recBuff, boolean printMsg) { long ledgerId = recBuff.getLong(); long entryId = recBuff.getLong(); int entrySize = recBuff.limit(); System.out.println("--------- Lid=" + ledgerId + ", Eid=" + entryId + ", ByteOffset=" + pos + ", EntrySize=" + entrySize + " ---------"); if (entryId == Bookie.METAENTRY_ID_LEDGER_KEY) { int masterKeyLen = recBuff.getInt(); byte[] masterKey = new byte[masterKeyLen]; recBuff.get(masterKey); System.out.println("Type: META"); System.out.println("MasterKey: " + bytes2Hex(masterKey)); System.out.println(); return; } if (entryId == Bookie.METAENTRY_ID_FENCE_KEY) { System.out.println("Type: META"); System.out.println("Fenced"); System.out.println(); return; } // process a data entry long lastAddConfirmed = recBuff.getLong(); System.out.println("Type: DATA"); System.out.println("LastConfirmed: " + lastAddConfirmed); if (!printMsg) { System.out.println(); return; } // skip digest checking recBuff.position(32 + 8); System.out.println("Data:"); System.out.println(); try { byte[] ret = new byte[recBuff.remaining()]; recBuff.get(ret); formatter.formatEntry(ret); } catch (Exception e) { System.out.println("N/A. Corrupted."); } System.out.println(); } static String bytes2Hex(byte[] data) { StringBuilder sb = new StringBuilder(data.length * 2); Formatter formatter = new Formatter(sb); for (byte b : data) { formatter.format("%02x", b); } return sb.toString(); } private static int getOptionIntValue(CommandLine cmdLine, String option, int defaultVal) { if (cmdLine.hasOption(option)) { String val = cmdLine.getOptionValue(option); try { return Integer.parseInt(val); } catch (NumberFormatException nfe) { System.err.println("ERROR: invalid value for option " + option + " : " + val); return defaultVal; } } return defaultVal; } private static long getOptionLongValue(CommandLine cmdLine, String option, long defaultVal) { if (cmdLine.hasOption(option)) { String val = cmdLine.getOptionValue(option); try { return Long.parseLong(val); } catch (NumberFormatException nfe) { System.err.println("ERROR: invalid value for option " + option + " : " + val); return defaultVal; } } return defaultVal; } } BufferedChannel.java000066400000000000000000000144551244507361200345750ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. 
See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.bookie; import java.io.IOException; import java.nio.ByteBuffer; import java.nio.channels.FileChannel; /** * Provides a buffering layer in front of a FileChannel. */ public class BufferedChannel { ByteBuffer writeBuffer; ByteBuffer readBuffer; private FileChannel bc; long position; int capacity; long readBufferStartPosition; long writeBufferStartPosition; // make constructor to be public for unit test public BufferedChannel(FileChannel bc, int capacity) throws IOException { this.bc = bc; this.capacity = capacity; position = bc.position(); writeBufferStartPosition = position; } /** * @return file channel */ FileChannel getFileChannel() { return this.bc; } /* public void close() throws IOException { bc.close(); } */ // public boolean isOpen() { // return bc.isOpen(); // } synchronized public int write(ByteBuffer src) throws IOException { int copied = 0; if (writeBuffer == null) { writeBuffer = ByteBuffer.allocateDirect(capacity); } while(src.remaining() > 0) { int truncated = 0; if (writeBuffer.remaining() < src.remaining()) { truncated = src.remaining() - writeBuffer.remaining(); src.limit(src.limit()-truncated); } copied += src.remaining(); writeBuffer.put(src); src.limit(src.limit()+truncated); if (writeBuffer.remaining() == 0) { writeBuffer.flip(); bc.write(writeBuffer); writeBuffer.clear(); writeBufferStartPosition = bc.position(); } } position += copied; return copied; } public long position() { return position; } /** * Retrieve the current size of the underlying FileChannel * * @return FileChannel size measured in bytes * * @throws IOException if some I/O error occurs reading the FileChannel */ public long size() throws IOException { return bc.size(); } public void flush(boolean sync) throws IOException { synchronized(this) { if (writeBuffer == null) { return; } writeBuffer.flip(); bc.write(writeBuffer); writeBuffer.clear(); writeBufferStartPosition = bc.position(); } if (sync) { bc.force(false); } } /*public Channel getInternalChannel() { return bc; }*/ synchronized public int read(ByteBuffer buff, long pos) throws IOException { if (readBuffer == null) { readBuffer = ByteBuffer.allocateDirect(capacity); readBufferStartPosition = Long.MIN_VALUE; } long prevPos = pos; while(buff.remaining() > 0) { // check if it is in the write buffer if (writeBuffer != null && writeBufferStartPosition <= pos) { long positionInBuffer = pos - writeBufferStartPosition; long bytesToCopy = writeBuffer.position()-positionInBuffer; if (bytesToCopy > buff.remaining()) { bytesToCopy = buff.remaining(); } if (bytesToCopy == 0) { throw new IOException("Read past EOF"); } ByteBuffer src = writeBuffer.duplicate(); src.position((int) positionInBuffer); src.limit((int) (positionInBuffer+bytesToCopy)); buff.put(src); pos+= bytesToCopy; } else if (writeBuffer == null && writeBufferStartPosition <= pos) { // here we reach 
the end break; // first check if there is anything we can grab from the readBuffer } else if (readBufferStartPosition <= pos && pos < readBufferStartPosition+readBuffer.capacity()) { long positionInBuffer = pos - readBufferStartPosition; long bytesToCopy = readBuffer.capacity()-positionInBuffer; if (bytesToCopy > buff.remaining()) { bytesToCopy = buff.remaining(); } ByteBuffer src = readBuffer.duplicate(); src.position((int) positionInBuffer); src.limit((int) (positionInBuffer+bytesToCopy)); buff.put(src); pos += bytesToCopy; // let's read it } else { readBufferStartPosition = pos; readBuffer.clear(); // make sure that we don't overlap with the write buffer if (readBufferStartPosition + readBuffer.capacity() >= writeBufferStartPosition) { readBufferStartPosition = writeBufferStartPosition - readBuffer.capacity(); if (readBufferStartPosition < 0) { readBuffer.put(LedgerEntryPage.zeroPage, 0, (int)-readBufferStartPosition); } } while(readBuffer.remaining() > 0) { if (bc.read(readBuffer, readBufferStartPosition+readBuffer.position()) <= 0) { throw new IOException("Short read"); } } readBuffer.put(LedgerEntryPage.zeroPage, 0, readBuffer.remaining()); readBuffer.clear(); } } return (int)(pos - prevPos); } } bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/Cookie.java000066400000000000000000000234601244507361200330460ustar00rootroot00000000000000/** * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.bookie; import java.io.BufferedReader; import java.io.EOFException; import java.io.File; import java.io.FileOutputStream; import java.io.FileReader; import java.io.OutputStreamWriter; import java.io.BufferedWriter; import java.io.IOException; import java.io.StringReader; import java.net.UnknownHostException; import org.apache.zookeeper.ZooKeeper; import org.apache.zookeeper.data.Stat; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.CreateMode; import org.apache.zookeeper.ZooDefs.Ids; import org.apache.bookkeeper.util.BookKeeperConstants; import org.apache.bookkeeper.util.StringUtils; import org.apache.bookkeeper.conf.ServerConfiguration; import org.apache.bookkeeper.proto.DataFormats.CookieFormat; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import com.google.protobuf.TextFormat; /** * When a bookie starts for the first time it generates a cookie, and stores * the cookie in zookeeper as well as in the each of the local filesystem * directories it uses. This cookie is used to ensure that for the life of the * bookie, its configuration stays the same. If any of the bookie directories * becomes unavailable, the bookie becomes unavailable. If the bookie changes * port, it must also reset all of its data. * * This is done to ensure data integrity. 
Without the cookie a bookie could * start with one of its ledger directories missing, so data would be missing, * but the bookie would be up, so the client would think that everything is ok * with the cluster. It's better to fail early and obviously. */ class Cookie { static Logger LOG = LoggerFactory.getLogger(Cookie.class); static final int CURRENT_COOKIE_LAYOUT_VERSION = 4; private int layoutVersion = 0; private String bookieHost = null; private String journalDir = null; private String ledgerDirs = null; private int znodeVersion = -1; private String instanceId = null; private Cookie() { } public void verify(Cookie c) throws BookieException.InvalidCookieException { String errMsg; if (c.layoutVersion < 3 && c.layoutVersion != layoutVersion) { errMsg = "Cookie is of too old version " + c.layoutVersion; LOG.error(errMsg); throw new BookieException.InvalidCookieException(errMsg); } else if (!(c.layoutVersion >= 3 && c.bookieHost.equals(bookieHost) && c.journalDir.equals(journalDir) && c.ledgerDirs .equals(ledgerDirs))) { errMsg = "Cookie [" + this + "] is not matching with [" + c + "]"; throw new BookieException.InvalidCookieException(errMsg); } else if ((instanceId == null && c.instanceId != null) || (instanceId != null && !instanceId.equals(c.instanceId))) { // instanceId should be same in both cookies errMsg = "instanceId " + instanceId + " is not matching with " + c.instanceId; throw new BookieException.InvalidCookieException(errMsg); } } public String toString() { if (layoutVersion <= 3) { return toStringVersion3(); } CookieFormat.Builder builder = CookieFormat.newBuilder(); builder.setBookieHost(bookieHost); builder.setJournalDir(journalDir); builder.setLedgerDirs(ledgerDirs); if (null != instanceId) { builder.setInstanceId(instanceId); } StringBuilder b = new StringBuilder(); b.append(CURRENT_COOKIE_LAYOUT_VERSION).append("\n"); b.append(TextFormat.printToString(builder.build())); return b.toString(); } private String toStringVersion3() { StringBuilder b = new StringBuilder(); b.append(CURRENT_COOKIE_LAYOUT_VERSION).append("\n") .append(bookieHost).append("\n") .append(journalDir).append("\n") .append(ledgerDirs).append("\n"); return b.toString(); } private static Cookie parse(BufferedReader reader) throws IOException { Cookie c = new Cookie(); String line = reader.readLine(); if (null == line) { throw new EOFException("Exception in parsing cookie"); } try { c.layoutVersion = Integer.parseInt(line.trim()); } catch (NumberFormatException e) { throw new IOException("Invalid string '" + line.trim() + "', cannot parse cookie."); } if (c.layoutVersion == 3) { c.bookieHost = reader.readLine(); c.journalDir = reader.readLine(); c.ledgerDirs = reader.readLine(); } else if (c.layoutVersion >= 4) { CookieFormat.Builder builder = CookieFormat.newBuilder(); TextFormat.merge(reader, builder); CookieFormat data = builder.build(); c.bookieHost = data.getBookieHost(); c.journalDir = data.getJournalDir(); c.ledgerDirs = data.getLedgerDirs(); // Since InstanceId is optional if (null != data.getInstanceId() && !data.getInstanceId().isEmpty()) { c.instanceId = data.getInstanceId(); } } return c; } void writeToDirectory(File directory) throws IOException { File versionFile = new File(directory, BookKeeperConstants.VERSION_FILENAME); FileOutputStream fos = new FileOutputStream(versionFile); BufferedWriter bw = null; try { bw = new BufferedWriter(new OutputStreamWriter(fos)); bw.write(toString()); } finally { if (bw != null) { bw.close(); } fos.close(); } } void writeToZooKeeper(ZooKeeper zk, 
ServerConfiguration conf) throws KeeperException, InterruptedException, UnknownHostException { String bookieCookiePath = conf.getZkLedgersRootPath() + "/" + BookKeeperConstants.COOKIE_NODE; String zkPath = getZkPath(conf); byte[] data = toString().getBytes(); if (znodeVersion != -1) { zk.setData(zkPath, data, znodeVersion); } else { if (zk.exists(bookieCookiePath, false) == null) { try { zk.create(bookieCookiePath, new byte[0], Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT); } catch (KeeperException.NodeExistsException nne) { LOG.info("More than one bookie tried to create {} at once. Safe to ignore", bookieCookiePath); } } zk.create(zkPath, data, Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT); Stat stat = zk.exists(zkPath, false); this.znodeVersion = stat.getVersion(); } } void deleteFromZooKeeper(ZooKeeper zk, ServerConfiguration conf) throws KeeperException, InterruptedException, UnknownHostException { String zkPath = getZkPath(conf); if (znodeVersion != -1) { zk.delete(zkPath, znodeVersion); } znodeVersion = -1; } static Cookie generateCookie(ServerConfiguration conf) throws UnknownHostException { Cookie c = new Cookie(); c.layoutVersion = CURRENT_COOKIE_LAYOUT_VERSION; c.bookieHost = StringUtils.addrToString(Bookie.getBookieAddress(conf)); c.journalDir = conf.getJournalDirName(); StringBuilder b = new StringBuilder(); String[] dirs = conf.getLedgerDirNames(); b.append(dirs.length); for (String d : dirs) { b.append("\t").append(d); } c.ledgerDirs = b.toString(); return c; } static Cookie readFromZooKeeper(ZooKeeper zk, ServerConfiguration conf) throws KeeperException, InterruptedException, IOException, UnknownHostException { String zkPath = getZkPath(conf); Stat stat = zk.exists(zkPath, false); byte[] data = zk.getData(zkPath, false, stat); BufferedReader reader = new BufferedReader(new StringReader(new String( data))); try { Cookie c = parse(reader); c.znodeVersion = stat.getVersion(); return c; } finally { reader.close(); } } static Cookie readFromDirectory(File directory) throws IOException { File versionFile = new File(directory, BookKeeperConstants.VERSION_FILENAME); BufferedReader reader = new BufferedReader(new FileReader(versionFile)); try { return parse(reader); } finally { reader.close(); } } public void setInstanceId(String instanceId) { this.instanceId = instanceId; } private static String getZkPath(ServerConfiguration conf) throws UnknownHostException { String bookieCookiePath = conf.getZkLedgersRootPath() + "/" + BookKeeperConstants.COOKIE_NODE; return bookieCookiePath + "/" + StringUtils.addrToString(Bookie.getBookieAddress(conf)); } } EntryLogger.java000066400000000000000000000506731244507361200340250ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. 
See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.bookie; import java.io.BufferedReader; import java.io.BufferedWriter; import java.io.File; import java.io.FileFilter; import java.io.FileInputStream; import java.io.FileNotFoundException; import java.io.FileOutputStream; import java.io.IOException; import java.io.InputStreamReader; import java.io.OutputStreamWriter; import java.io.RandomAccessFile; import java.nio.ByteBuffer; import java.nio.channels.FileChannel; import java.util.ArrayList; import java.util.Collections; import java.util.List; import java.util.Map.Entry; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.CopyOnWriteArrayList; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.apache.bookkeeper.bookie.LedgerDirsManager.LedgerDirsListener; import org.apache.bookkeeper.conf.ServerConfiguration; import org.apache.bookkeeper.util.IOUtils; /** * This class manages the writing of the bookkeeper entries. All the new * entries are written to a common log. The LedgerCache will have pointers * into files created by this class with offsets into the files to find * the actual ledger entry. The entry log files created by this class are * identified by a long. */ public class EntryLogger { private static final Logger LOG = LoggerFactory.getLogger(EntryLogger.class); volatile File currentDir; private LedgerDirsManager ledgerDirsManager; private AtomicBoolean shouldCreateNewEntryLog = new AtomicBoolean(false); private long logId; /** * The maximum size of an entry logger file. */ final long logSizeLimit; private volatile BufferedChannel logChannel; private final CopyOnWriteArrayList<EntryLogListener> listeners = new CopyOnWriteArrayList<EntryLogListener>(); // this indicates that a write has happened since the last flush
private volatile boolean somethingWritten = false; /** * The 1K block at the head of the entry logger file * that contains the fingerprint and (future) meta-data */ final static int LOGFILE_HEADER_SIZE = 1024; final ByteBuffer LOGFILE_HEADER = ByteBuffer.allocate(LOGFILE_HEADER_SIZE); final static int MIN_SANE_ENTRY_SIZE = 8 + 8; final static long MB = 1024 * 1024; /** * Scan entries in an entry log file. */ static interface EntryLogScanner { /** * Tests whether or not the entries belonging to the specified ledger * should be processed. * * @param ledgerId * Ledger ID. * @return true if and only if the entries of the ledger should be scanned. */ public boolean accept(long ledgerId); /** * Process an entry. * * @param ledgerId * Ledger ID. * @param offset * File offset of this entry. * @param entry * Entry ByteBuffer * @throws IOException */ public void process(long ledgerId, long offset, ByteBuffer entry) throws IOException; } /** * Entry Log Listener */ static interface EntryLogListener { /** * Callback when entry log is flushed. */ public void onEntryLogFlushed(); } /** * Create an EntryLogger that stores its log files in the given * directories */ public EntryLogger(ServerConfiguration conf, LedgerDirsManager ledgerDirsManager) throws IOException { this.ledgerDirsManager = ledgerDirsManager; // log size limit
this.logSizeLimit = conf.getEntryLogSizeLimit();
// Initialize the entry log header buffer. This cannot be a static object
// since in our unit tests, we run multiple Bookies and thus EntryLoggers
// within the same JVM. All of these Bookie instances access this header
// so there can be race conditions when entry logs are rolled over and
// this header buffer is cleared before writing it into the new logChannel.
LOGFILE_HEADER.put("BKLO".getBytes()); // Find the largest logId
logId = -1; for (File dir : ledgerDirsManager.getAllLedgerDirs()) { if (!dir.exists()) { throw new FileNotFoundException("Entry log directory does not exist"); } long lastLogId = getLastLogId(dir); if (lastLogId > logId) { logId = lastLogId; } } initialize(); }
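    // Entry locations handed back by addEntry() pack the log id and the byte
    // offset within that log into a single long; readEntry() reverses the
    // encoding. A sketch of the scheme used below:
    //
    //   long location = (logId << 32L) | pos;          // as returned by addEntry()
    //   long entryLogId = location >> 32L;             // which .log file
    //   long offsetInLog = location & 0xffffffffL;     // where in that file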
void addListener(EntryLogListener listener) { if (null != listener) { listeners.add(listener); } } /** * Maps entry log files to open channels. */ private ConcurrentHashMap<Long, BufferedChannel> channels = new ConcurrentHashMap<Long, BufferedChannel>(); synchronized long getCurrentLogId() { return logId; } protected void initialize() throws IOException { // Register listener for disk full notifications.
ledgerDirsManager.addLedgerDirsListener(getLedgerDirsListener()); // create a new log to write
createNewLog(); } private LedgerDirsListener getLedgerDirsListener() { return new LedgerDirsListener() { @Override public void diskFull(File disk) { // If the current entry log disk is full, then create a new entry log.
if (currentDir != null && currentDir.equals(disk)) { shouldCreateNewEntryLog.set(true); } } @Override public void diskFailed(File disk) { // Nothing to handle here. Will be handled in Bookie
} @Override public void allDisksFull() { // Nothing to handle here. Will be handled in Bookie
} @Override public void fatalError() { // Nothing to handle here. Will be handled in Bookie
} }; } /** * Creates a new log file */ void createNewLog() throws IOException { if (logChannel != null) { logChannel.flush(true); } // It would be better not to overwrite existing entry log files
String logFileName = null; do { logFileName = Long.toHexString(++logId) + ".log"; for (File dir : ledgerDirsManager.getAllLedgerDirs()) { File newLogFile = new File(dir, logFileName); if (newLogFile.exists()) { LOG.warn("Found existing entry log " + newLogFile + " when trying to create it as a new log."); logFileName = null; break; } } } while (logFileName == null); // Update last log id first
currentDir = ledgerDirsManager.pickRandomWritableDir(); setLastLogId(currentDir, logId); File newLogFile = new File(currentDir, logFileName); logChannel = new BufferedChannel(new RandomAccessFile(newLogFile, "rw").getChannel(), 64*1024); logChannel.write((ByteBuffer) LOGFILE_HEADER.clear()); channels.put(logId, logChannel); } /** * Remove entry log. * * @param entryLogId * Entry Log File Id */ protected boolean removeEntryLog(long entryLogId) { BufferedChannel bc = channels.remove(entryLogId); if (null != bc) { // close its underlying file channel so the file can actually be deleted
try { bc.getFileChannel().close(); } catch (IOException ie) { LOG.warn("Exception while closing garbage collected entryLog file : ", ie); } } File entryLogFile; try { entryLogFile = findFile(entryLogId); } catch (FileNotFoundException e) { LOG.error("Trying to delete an entryLog file that could not be found: " + entryLogId + ".log"); return false; } if (!entryLogFile.delete()) { LOG.warn("Could not delete entry log file {}", entryLogFile); } return true; }
/** * Writes the given id to the "lastId" file in the given directory. */ private void setLastLogId(File dir, long logId) throws IOException { FileOutputStream fos; fos = new FileOutputStream(new File(dir, "lastId")); BufferedWriter bw = new BufferedWriter(new OutputStreamWriter(fos)); try { bw.write(Long.toHexString(logId) + "\n"); bw.flush(); } finally { try { bw.close(); } catch (IOException e) { } } } private long getLastLogId(File dir) { long id = readLastLogId(dir); // read success
if (id > 0) { return id; } // read failed, scan the ledger directories to find the biggest log id
File[] logFiles = dir.listFiles(new FileFilter() { @Override public boolean accept(File file) { return file.getName().endsWith(".log"); } }); List<Long> logs = new ArrayList<Long>(); for (File lf : logFiles) { String idString = lf.getName().split("\\.")[0]; try { long lid = Long.parseLong(idString, 16); logs.add(lid); } catch (NumberFormatException nfe) { } } // no log file found in this directory
if (0 == logs.size()) { return -1; } // order the collection
Collections.sort(logs); return logs.get(logs.size() - 1); } /** * Reads id from the "lastId" file in the given directory. */ private long readLastLogId(File f) { FileInputStream fis; try { fis = new FileInputStream(new File(f, "lastId")); } catch (FileNotFoundException e) { return -1; } BufferedReader br = new BufferedReader(new InputStreamReader(fis)); try { String lastIdString = br.readLine(); return Long.parseLong(lastIdString, 16); } catch (IOException e) { return -1; } catch(NumberFormatException e) { return -1; } finally { try { br.close(); } catch (IOException e) { } } } synchronized void flush() throws IOException { if (logChannel != null) { logChannel.flush(true); } somethingWritten = false; for (EntryLogListener listener: listeners) { listener.onEntryLogFlushed(); } } boolean isFlushRequired() { return somethingWritten; } synchronized long addEntry(long ledger, ByteBuffer entry) throws IOException { // Create new log if logSizeLimit reached or current disk is full
boolean createNewLog = shouldCreateNewEntryLog.get(); if (createNewLog || (logChannel.position() + entry.remaining() + 4 > logSizeLimit)) { createNewLog(); // Reset the flag
if (createNewLog) { shouldCreateNewEntryLog.set(false); } } // Each record on disk is a 4-byte size prefix followed by the entry payload,
// which itself begins with the ledgerId and entryId longs.
ByteBuffer buff = ByteBuffer.allocate(4); buff.putInt(entry.remaining()); buff.flip(); logChannel.write(buff); long pos = logChannel.position(); logChannel.write(entry); //logChannel.flush(false);
somethingWritten = true; return (logId << 32L) | pos; } byte[] readEntry(long ledgerId, long entryId, long location) throws IOException, Bookie.NoEntryException { long entryLogId = location >> 32L; long pos = location & 0xffffffffL; ByteBuffer sizeBuff = ByteBuffer.allocate(4); pos -= 4; // we want to get the ledgerId and length to check
BufferedChannel fc; try { fc = getChannelForLogId(entryLogId); } catch (FileNotFoundException e) { FileNotFoundException newe = new FileNotFoundException(e.getMessage() + " for " + ledgerId + " with location " + location); newe.setStackTrace(e.getStackTrace()); throw newe; } if (fc.read(sizeBuff, pos) != sizeBuff.capacity()) { throw new Bookie.NoEntryException("Short read from entrylog " + entryLogId, ledgerId, entryId); } pos += 4; sizeBuff.flip(); int entrySize = sizeBuff.getInt(); // entrySize does not include the ledgerId
if (entrySize > MB) { LOG.error("Sanity check failed for entry size of " + entrySize + " at location " + pos + " in " + entryLogId); } if (entrySize < MIN_SANE_ENTRY_SIZE) { LOG.error("Read invalid entry length {}", entrySize); throw new IOException("Invalid entry length " + entrySize); } byte
data[] = new byte[entrySize]; ByteBuffer buff = ByteBuffer.wrap(data); int rc = fc.read(buff, pos); if ( rc != data.length) { // Note that throwing NoEntryException here instead of IOException is not // without risk. If all bookies in a quorum throw this same exception // the client will assume that it has reached the end of the ledger. // However, this may not be the case, as a very specific error condition // could have occurred, where the length of the entry was corrupted on all // replicas. However, the chance of this happening is very very low, so // returning NoEntryException is mostly safe. throw new Bookie.NoEntryException("Short read for " + ledgerId + "@" + entryId + " in " + entryLogId + "@" + pos + "("+rc+"!="+data.length+")", ledgerId, entryId); } buff.flip(); long thisLedgerId = buff.getLong(); if (thisLedgerId != ledgerId) { throw new IOException("problem found in " + entryLogId + "@" + entryId + " at position + " + pos + " entry belongs to " + thisLedgerId + " not " + ledgerId); } long thisEntryId = buff.getLong(); if (thisEntryId != entryId) { throw new IOException("problem found in " + entryLogId + "@" + entryId + " at position + " + pos + " entry is " + thisEntryId + " not " + entryId); } return data; } private BufferedChannel getChannelForLogId(long entryLogId) throws IOException { BufferedChannel fc = channels.get(entryLogId); if (fc != null) { return fc; } File file = findFile(entryLogId); // get channel is used to open an existing entry log file // it would be better to open using read mode FileChannel newFc = new RandomAccessFile(file, "r").getChannel(); // If the file already exists before creating a BufferedChannel layer above it, // set the FileChannel's position to the end so the write buffer knows where to start. newFc.position(newFc.size()); fc = new BufferedChannel(newFc, 8192); BufferedChannel oldfc = channels.putIfAbsent(entryLogId, fc); if (oldfc != null) { newFc.close(); return oldfc; } else { return fc; } } /** * Whether the log file exists or not. */ boolean logExists(long logId) { for (File d : ledgerDirsManager.getAllLedgerDirs()) { File f = new File(d, Long.toHexString(logId) + ".log"); if (f.exists()) { return true; } } return false; } private File findFile(long logId) throws FileNotFoundException { for (File d : ledgerDirsManager.getAllLedgerDirs()) { File f = new File(d, Long.toHexString(logId)+".log"); if (f.exists()) { return f; } } throw new FileNotFoundException("No file for log " + Long.toHexString(logId)); } /** * Scan entry log * * @param entryLogId * Entry Log Id * @param scanner * Entry Log Scanner * @throws IOException */ protected void scanEntryLog(long entryLogId, EntryLogScanner scanner) throws IOException { ByteBuffer sizeBuff = ByteBuffer.allocate(4); ByteBuffer lidBuff = ByteBuffer.allocate(8); BufferedChannel bc; // Get the BufferedChannel for the current entry log file try { bc = getChannelForLogId(entryLogId); } catch (IOException e) { LOG.warn("Failed to get channel to scan entry log: " + entryLogId + ".log"); throw e; } // Start the read position in the current entry log file to be after // the header where all of the ledger entries are. long pos = LOGFILE_HEADER_SIZE; // Read through the entry log file and extract the ledger ID's. while (true) { // Check if we've finished reading the entry log file. 
if (pos >= bc.size()) { break; } if (bc.read(sizeBuff, pos) != sizeBuff.capacity()) { throw new IOException("Short read for entry size from entrylog " + entryLogId); } long offset = pos; pos += 4; sizeBuff.flip(); int entrySize = sizeBuff.getInt(); if (entrySize > MB) { LOG.warn("Found large size entry of " + entrySize + " at location " + pos + " in " + entryLogId); } sizeBuff.clear(); // try to read ledger id first if (bc.read(lidBuff, pos) != lidBuff.capacity()) { throw new IOException("Short read for ledger id from entrylog " + entryLogId); } lidBuff.flip(); long lid = lidBuff.getLong(); lidBuff.clear(); if (!scanner.accept(lid)) { // skip this entry pos += entrySize; continue; } // read the entry byte data[] = new byte[entrySize]; ByteBuffer buff = ByteBuffer.wrap(data); int rc = bc.read(buff, pos); if (rc != data.length) { throw new IOException("Short read for ledger entry from entryLog " + entryLogId + "@" + pos + "(" + rc + "!=" + data.length + ")"); } buff.flip(); // process the entry scanner.process(lid, offset, buff); // Advance position to the next entry pos += entrySize; } } /** * Shutdown method to gracefully stop entry logger. */ public void shutdown() { // since logChannel is buffered channel, do flush when shutting down try { flush(); for (Entry channelEntry : channels .entrySet()) { channelEntry.getValue().getFileChannel().close(); } } catch (IOException ie) { // we have no idea how to avoid io exception during shutting down, so just ignore it LOG.error("Error flush entry log during shutting down, which may cause entry log corrupted.", ie); } finally { for (Entry channelEntry : channels .entrySet()) { FileChannel fileChannel = channelEntry.getValue() .getFileChannel(); if (fileChannel.isOpen()) { IOUtils.close(LOG, fileChannel); } } } } } bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/ExitCode.java000066400000000000000000000026341244507361200333410ustar00rootroot00000000000000/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
* */ package org.apache.bookkeeper.bookie; /** * Exit code used to exit bookie server */ public class ExitCode { // normal quit public final static int OK = 0; // invalid configuration public final static int INVALID_CONF = 1; // exception running bookie server public final static int SERVER_EXCEPTION = 2; // zookeeper is expired public final static int ZK_EXPIRED = 3; // register bookie on zookeeper failed public final static int ZK_REG_FAIL = 4; // exception running bookie public final static int BOOKIE_EXCEPTION = 5; } bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/FileInfo.java000066400000000000000000000276071244507361200333370ustar00rootroot00000000000000/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.bookie; import java.io.File; import java.io.IOException; import java.io.RandomAccessFile; import java.nio.ByteBuffer; import java.nio.BufferUnderflowException; import java.nio.channels.FileChannel; import com.google.common.annotations.VisibleForTesting; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * This is the file handle for a ledger's index file that maps entry ids to location. * It is used by LedgerCache. * *

 * Ledger index file is made of a header and several fixed-length index pages,
 * which record the offsets of data stored in entry loggers:
 *
 *   <header><index pages>
 *
 * Header is formatted as below:
 *
 *   <magic bytes><version><len of master key><master key><state>
 *
 *   - magic bytes: 4 bytes, 'BKLE'
 *   - version: 4 bytes
 *   - len of master key: indicates length of master key. -1 means no master key stored in header.
 *   - master key: master key
 *   - state: bit map to indicate the state, 32 bits.
 *
 * Index page is a fixed-length page, which contains several entries pointing
 * to the offsets of data stored in entry loggers.
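 *
 * A minimal sketch of reading these header fields (illustrative only, not
 * part of the original source; assumes a non-negative key length, which
 * readHeader() enforces):
 *
 *   ByteBuffer bb = ...;       // first START_OF_DATA (1024) bytes of the file
 *   int magic   = bb.getInt(); // must equal the 'BKLE' signature
 *   int version = bb.getInt(); // must equal headerVersion (0)
 *   int keyLen  = bb.getInt(); // NO_MASTER_KEY (-1) means none stored
 *   byte[] key  = new byte[keyLen];
 *   bb.get(key);
 *   int state   = bb.getInt(); // bit STATE_FENCED_BIT (0x1) marks fencing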
*/ class FileInfo { static Logger LOG = LoggerFactory.getLogger(FileInfo.class); static final int NO_MASTER_KEY = -1; static final int STATE_FENCED_BIT = 0x1; private FileChannel fc; private File lf; byte[] masterKey; /** * The fingerprint of a ledger index file */ static final public int signature = ByteBuffer.wrap("BKLE".getBytes()).getInt(); static final public int headerVersion = 0; static final long START_OF_DATA = 1024; private long size; private int useCount; private boolean isClosed; private long sizeSinceLastwrite; // bit map for states of the ledger. private int stateBits; private boolean needFlushHeader = false; // file access mode protected String mode; public FileInfo(File lf, byte[] masterKey) throws IOException { this.lf = lf; this.masterKey = masterKey; mode = "rw"; } public File getLf() { return lf; } public long getSizeSinceLastwrite() { return sizeSinceLastwrite; } synchronized public void readHeader() throws IOException { if (lf.exists()) { if (fc != null) { return; } fc = new RandomAccessFile(lf, mode).getChannel(); size = fc.size(); sizeSinceLastwrite = size; // avoid hang on reading partial index ByteBuffer bb = ByteBuffer.allocate((int)(Math.min(size, START_OF_DATA))); while(bb.hasRemaining()) { fc.read(bb); } bb.flip(); if (bb.getInt() != signature) { throw new IOException("Missing ledger signature"); } int version = bb.getInt(); if (version != headerVersion) { throw new IOException("Incompatible ledger version " + version); } int length = bb.getInt(); if (length < 0) { throw new IOException("Length " + length + " is invalid"); } else if (length > bb.remaining()) { throw new BufferUnderflowException(); } masterKey = new byte[length]; bb.get(masterKey); stateBits = bb.getInt(); needFlushHeader = false; } else { throw new IOException("Ledger index file does not exist"); } } synchronized void checkOpen(boolean create) throws IOException { if (fc != null) { return; } boolean exists = lf.exists(); if (masterKey == null && !exists) { throw new IOException(lf + " not found"); } if (!exists) { if (create) { // delayed the creation of parents directories checkParents(lf); fc = new RandomAccessFile(lf, mode).getChannel(); size = fc.size(); if (size == 0) { writeHeader(); } } } else { try { readHeader(); } catch (BufferUnderflowException buf) { LOG.warn("Exception when reading header of {} : {}", lf, buf); if (null != masterKey) { LOG.warn("Attempting to write header of {} again.", lf); writeHeader(); } else { throw new IOException("Error reading header " + lf); } } } } private void writeHeader() throws IOException { ByteBuffer bb = ByteBuffer.allocate((int)START_OF_DATA); bb.putInt(signature); bb.putInt(headerVersion); bb.putInt(masterKey.length); bb.put(masterKey); bb.putInt(stateBits); bb.rewind(); fc.position(0); fc.write(bb); } synchronized public boolean isFenced() throws IOException { checkOpen(false); return (stateBits & STATE_FENCED_BIT) == STATE_FENCED_BIT; } /** * @return true if set fence succeed, otherwise false when * it already fenced or failed to set fenced. 
*/ synchronized public boolean setFenced() throws IOException { checkOpen(false); LOG.debug("Try to set fenced state in file info {} : state bits {}.", lf, stateBits); if ((stateBits & STATE_FENCED_BIT) != STATE_FENCED_BIT) { // not fenced yet stateBits |= STATE_FENCED_BIT; needFlushHeader = true; return true; } else { return false; } } // flush the header when header is changed synchronized public void flushHeader() throws IOException { if (needFlushHeader) { checkOpen(true); writeHeader(); needFlushHeader = false; } } synchronized public long size() throws IOException { checkOpen(false); long rc = size-START_OF_DATA; if (rc < 0) { rc = 0; } return rc; } synchronized public int read(ByteBuffer bb, long position) throws IOException { return readAbsolute(bb, position + START_OF_DATA); } private int readAbsolute(ByteBuffer bb, long start) throws IOException { checkOpen(false); if (fc == null) { return 0; } int total = 0; while(bb.remaining() > 0) { int rc = fc.read(bb, start); if (rc <= 0) { throw new IOException("Short read"); } total += rc; // should move read position start += rc; } return total; } /** * Close a file info * * @param force * if set to true, the index is forced to create before closed, * if set to false, the index is not forced to create. */ synchronized public void close(boolean force) throws IOException { isClosed = true; checkOpen(force); // Any time when we force close a file, we should try to flush header. otherwise, we might lose fence bit. if (force) { flushHeader(); } if (useCount == 0 && fc != null) { fc.close(); } } synchronized public long write(ByteBuffer[] buffs, long position) throws IOException { checkOpen(true); long total = 0; try { fc.position(position+START_OF_DATA); while(buffs[buffs.length-1].remaining() > 0) { long rc = fc.write(buffs); if (rc <= 0) { throw new IOException("Short write"); } total += rc; } } finally { fc.force(true); long newsize = position+START_OF_DATA+total; if (newsize > size) { size = newsize; } } sizeSinceLastwrite = fc.size(); return total; } /** * Copies current file contents upto specified size to the target file and * deletes the current file. If size not known then pass size as * Long.MAX_VALUE to copy complete file. 
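 *
 * (Added summary of the implementation below, for clarity.) The move is
 * staged through a temporary relocation file: contents are first copied to
 * "<newFile>.rloc", the old index file is deleted, and the rloc file is then
 * renamed to its final name, so a partially copied index is never left under
 * the final file name.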
*/ public synchronized void moveToNewLocation(File newFile, long size) throws IOException { checkOpen(false); if (fc != null) { if (size > fc.size()) { size = fc.size(); } File rlocFile = new File(newFile.getParentFile(), newFile.getName() + LedgerCacheImpl.RLOC); if (!rlocFile.exists()) { checkParents(rlocFile); if (!rlocFile.createNewFile()) { throw new IOException("Creating new cache index file " + rlocFile + " failed "); } } // copy contents from old.idx to new.idx.rloc FileChannel newFc = new RandomAccessFile(rlocFile, "rw").getChannel(); try { long written = 0; while (written < size) { long count = fc.transferTo(written, size, newFc); if (count <= 0) { throw new IOException("Copying to new location " + rlocFile + " failed"); } written += count; } if (written <= 0 && size > 0) { throw new IOException("Copying to new location " + rlocFile + " failed"); } } finally { newFc.force(true); newFc.close(); } // delete old.idx fc.close(); if (!delete()) { LOG.error("Failed to delete the previous index file " + lf); throw new IOException("Failed to delete the previous index file " + lf); } // rename new.idx.rloc to new.idx if (!rlocFile.renameTo(newFile)) { LOG.error("Failed to rename " + rlocFile + " to " + newFile); throw new IOException("Failed to rename " + rlocFile + " to " + newFile); } fc = new RandomAccessFile(newFile, mode).getChannel(); } lf = newFile; } synchronized public byte[] getMasterKey() throws IOException { checkOpen(false); return masterKey; } synchronized public void use() { useCount++; } @VisibleForTesting synchronized int getUseCount() { return useCount; } synchronized public void release() { useCount--; if (isClosed && useCount == 0 && fc != null) { try { fc.close(); } catch (IOException e) { LOG.error("Error closing file channel", e); } } } public boolean delete() { return lf.delete(); } static final private void checkParents(File f) throws IOException { File parent = f.getParentFile(); if (parent.exists()) { return; } if (parent.mkdirs() == false) { throw new IOException("Counldn't mkdirs for " + parent); } } } FileSystemUpgrade.java000066400000000000000000000357241244507361200351600ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/** * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
* */ package org.apache.bookkeeper.bookie; import org.apache.bookkeeper.util.BookKeeperConstants; import org.apache.bookkeeper.util.HardLink; import org.apache.commons.io.FileUtils; import org.apache.commons.cli.BasicParser; import org.apache.commons.cli.Options; import org.apache.commons.cli.CommandLine; import org.apache.commons.cli.HelpFormatter; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import java.io.File; import java.io.FilenameFilter; import java.io.IOException; import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; import java.util.Map; import java.util.HashMap; import java.util.List; import java.util.ArrayList; import java.util.Scanner; import java.util.NoSuchElementException; import java.net.MalformedURLException; import org.apache.bookkeeper.conf.ServerConfiguration; import org.apache.commons.configuration.ConfigurationException; import org.apache.zookeeper.ZooKeeper; import org.apache.zookeeper.Watcher; import org.apache.zookeeper.WatchedEvent; import org.apache.zookeeper.KeeperException; /** * Application for upgrading the bookkeeper filesystem * between versions */ public class FileSystemUpgrade { static Logger LOG = LoggerFactory.getLogger(FileSystemUpgrade.class); static FilenameFilter BOOKIE_FILES_FILTER = new FilenameFilter() { private boolean containsIndexFiles(File dir, String name) { if (name.endsWith(".idx")) { return true; } try { Long.parseLong(name, 16); File d = new File(dir, name); if (d.isDirectory()) { String[] files = d.list(); for (String f : files) { if (containsIndexFiles(d, f)) { return true; } } } } catch (NumberFormatException nfe) { return false; } return false; } public boolean accept(File dir, String name) { if (name.endsWith(".txn") || name.endsWith(".log") || name.equals("lastId") || name.equals("lastMark")) { return true; } if (containsIndexFiles(dir, name)) { return true; } return false; } }; private static List getAllDirectories(ServerConfiguration conf) { List dirs = new ArrayList(); dirs.add(conf.getJournalDir()); for (File d: conf.getLedgerDirs()) { dirs.add(d); } return dirs; } private static int detectPreviousVersion(File directory) throws IOException { String[] files = directory.list(BOOKIE_FILES_FILTER); File v2versionFile = new File(directory, BookKeeperConstants.VERSION_FILENAME); if (files.length == 0 && !v2versionFile.exists()) { // no old data, so we're ok return Cookie.CURRENT_COOKIE_LAYOUT_VERSION; } if (!v2versionFile.exists()) { return 1; } Scanner s = new Scanner(v2versionFile); try { return s.nextInt(); } catch (NoSuchElementException nse) { LOG.error("Couldn't parse version file " + v2versionFile , nse); throw new IOException("Couldn't parse version file", nse); } catch (IllegalStateException ise) { LOG.error("Error reading file " + v2versionFile, ise); throw new IOException("Error reading version file", ise); } finally { s.close(); } } private static ZooKeeper newZookeeper(final ServerConfiguration conf) throws BookieException.UpgradeException { try { final CountDownLatch latch = new CountDownLatch(1); ZooKeeper zk = new ZooKeeper(conf.getZkServers(), conf.getZkTimeout(), new Watcher() { @Override public void process(WatchedEvent event) { // handle session disconnects and expires if (event.getState().equals(Watcher.Event.KeeperState.SyncConnected)) { latch.countDown(); } } }); if (!latch.await(conf.getZkTimeout()*2, TimeUnit.MILLISECONDS)) { zk.close(); throw new BookieException.UpgradeException("Couldn't connect to zookeeper"); } return zk; } catch (InterruptedException ie) { throw 
new BookieException.UpgradeException(ie); } catch (IOException ioe) { throw new BookieException.UpgradeException(ioe); } } private static void linkIndexDirectories(File srcPath, File targetPath) throws IOException { String[] files = srcPath.list(); for (String f : files) { if (f.endsWith(".idx")) { // this is an index dir, create the links if (!targetPath.mkdirs()) { throw new IOException("Could not create target path ["+targetPath+"]"); } HardLink.createHardLinkMult(srcPath, files, targetPath); return; } File newSrcPath = new File(srcPath, f); if (newSrcPath.isDirectory()) { try { Long.parseLong(f, 16); linkIndexDirectories(newSrcPath, new File(targetPath, f)); } catch (NumberFormatException nfe) { // filename does not parse to a hex Long, so // it will not contain idx files. Ignoring } } } } public static void upgrade(ServerConfiguration conf) throws BookieException.UpgradeException, InterruptedException { LOG.info("Upgrading..."); ZooKeeper zk = newZookeeper(conf); try { Map deferredMoves = new HashMap(); Cookie c = Cookie.generateCookie(conf); for (File d : getAllDirectories(conf)) { LOG.info("Upgrading {}", d); int version = detectPreviousVersion(d); if (version == Cookie.CURRENT_COOKIE_LAYOUT_VERSION) { LOG.info("Directory is current, no need to upgrade"); continue; } try { File curDir = new File(d, BookKeeperConstants.CURRENT_DIR); File tmpDir = new File(d, "upgradeTmp." + System.nanoTime()); deferredMoves.put(curDir, tmpDir); if (!tmpDir.mkdirs()) { throw new BookieException.UpgradeException("Could not create temporary directory " + tmpDir); } c.writeToDirectory(tmpDir); String[] files = d.list(new FilenameFilter() { public boolean accept(File dir, String name) { return BOOKIE_FILES_FILTER.accept(dir, name) && !(new File(dir, name).isDirectory()); } }); HardLink.createHardLinkMult(d, files, tmpDir); linkIndexDirectories(d, tmpDir); } catch (IOException ioe) { LOG.error("Error upgrading {}", d); throw new BookieException.UpgradeException(ioe); } } for (Map.Entry e : deferredMoves.entrySet()) { try { FileUtils.moveDirectory(e.getValue(), e.getKey()); } catch (IOException ioe) { String err = String.format("Error moving upgraded directories into place %s -> %s ", e.getValue(), e.getKey()); LOG.error(err, ioe); throw new BookieException.UpgradeException(ioe); } } if (deferredMoves.isEmpty()) { return; } try { c.writeToZooKeeper(zk, conf); } catch (KeeperException ke) { LOG.error("Error writing cookie to zookeeper"); throw new BookieException.UpgradeException(ke); } } catch (IOException ioe) { throw new BookieException.UpgradeException(ioe); } finally { zk.close(); } LOG.info("Done"); } public static void finalizeUpgrade(ServerConfiguration conf) throws BookieException.UpgradeException, InterruptedException { LOG.info("Finalizing upgrade..."); // verify that upgrade is correct for (File d : getAllDirectories(conf)) { LOG.info("Finalizing {}", d); try { int version = detectPreviousVersion(d); if (version < 3) { if (version == 2) { File v2versionFile = new File(d, BookKeeperConstants.VERSION_FILENAME); if (!v2versionFile.delete()) { LOG.warn("Could not delete old version file {}", v2versionFile); } } File[] files = d.listFiles(BOOKIE_FILES_FILTER); for (File f : files) { if (f.isDirectory()) { FileUtils.deleteDirectory(f); } else{ if (!f.delete()) { LOG.warn("Could not delete {}", f); } } } } } catch (IOException ioe) { LOG.error("Error finalizing {}", d); throw new BookieException.UpgradeException(ioe); } } // noop at the moment LOG.info("Done"); } public static void 
rollback(ServerConfiguration conf) throws BookieException.UpgradeException, InterruptedException { LOG.info("Rolling back upgrade..."); ZooKeeper zk = newZookeeper(conf); try { for (File d : getAllDirectories(conf)) { LOG.info("Rolling back {}", d); try { // ensure there is a previous version before rollback int version = detectPreviousVersion(d); if (version <= Cookie.CURRENT_COOKIE_LAYOUT_VERSION) { File curDir = new File(d, BookKeeperConstants.CURRENT_DIR); FileUtils.deleteDirectory(curDir); } else { throw new BookieException.UpgradeException( "Cannot rollback as previous data does not exist"); } } catch (IOException ioe) { LOG.error("Error rolling back {}", d); throw new BookieException.UpgradeException(ioe); } } try { Cookie c = Cookie.readFromZooKeeper(zk, conf); c.deleteFromZooKeeper(zk, conf); } catch (KeeperException ke) { LOG.error("Error deleting cookie from ZooKeeper"); throw new BookieException.UpgradeException(ke); } catch (IOException ioe) { LOG.error("I/O Error deleting cookie from ZooKeeper"); throw new BookieException.UpgradeException(ioe); } } finally { zk.close(); } LOG.info("Done"); } private static void printHelp(Options opts) { HelpFormatter hf = new HelpFormatter(); hf.printHelp("FileSystemUpgrade [options]", opts); } public static void main(String[] args) throws Exception { org.apache.log4j.Logger root = org.apache.log4j.Logger.getRootLogger(); root.addAppender(new org.apache.log4j.ConsoleAppender( new org.apache.log4j.PatternLayout("%-5p [%t]: %m%n"))); root.setLevel(org.apache.log4j.Level.ERROR); org.apache.log4j.Logger.getLogger(FileSystemUpgrade.class).setLevel( org.apache.log4j.Level.INFO); final Options opts = new Options(); opts.addOption("c", "conf", true, "Configuration for Bookie"); opts.addOption("u", "upgrade", false, "Upgrade bookie directories"); opts.addOption("f", "finalize", false, "Finalize upgrade"); opts.addOption("r", "rollback", false, "Rollback upgrade"); opts.addOption("h", "help", false, "Print help message"); BasicParser parser = new BasicParser(); CommandLine cmdLine = parser.parse(opts, args); if (cmdLine.hasOption("h")) { printHelp(opts); return; } if (!cmdLine.hasOption("c")) { String err = "Cannot upgrade without configuration"; LOG.error(err); printHelp(opts); throw new IllegalArgumentException(err); } String confFile = cmdLine.getOptionValue("c"); ServerConfiguration conf = new ServerConfiguration(); try { conf.loadConf(new File(confFile).toURI().toURL()); } catch (MalformedURLException mue) { LOG.error("Could not open configuration file " + confFile, mue); throw new IllegalArgumentException(); } catch (ConfigurationException ce) { LOG.error("Invalid configuration file " + confFile, ce); throw new IllegalArgumentException(); } if (cmdLine.hasOption("u")) { upgrade(conf); } else if (cmdLine.hasOption("r")) { rollback(conf); } else if (cmdLine.hasOption("f")) { finalizeUpgrade(conf); } else { String err = "Must specify -upgrade, -finalize or -rollback"; LOG.error(err); printHelp(opts); throw new IllegalArgumentException(err); } } } GarbageCollector.java000066400000000000000000000030241244507361200347470ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/** * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. 
The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.bookie; /** * This is the garbage collector interface, garbage collector implementers * need to extends this class to remove the deleted ledgers. */ public interface GarbageCollector { /** * Do the garbage collector work * * @param garbageCleaner * cleaner used to clean selected garbages */ public abstract void gc(GarbageCleaner garbageCleaner); /** * A interface used to define customised garbage cleaner */ public interface GarbageCleaner { /** * Clean a specific ledger * * @param ledgerId * Ledger ID to be cleaned */ public void clean(final long ledgerId) ; } } GarbageCollectorThread.java000066400000000000000000000551301244507361200361040ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/** * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.bookie; import java.io.IOException; import java.nio.ByteBuffer; import java.util.ArrayList; import java.util.Collections; import java.util.Comparator; import java.util.List; import java.util.Map; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.atomic.AtomicBoolean; import com.google.common.util.concurrent.RateLimiter; import org.apache.bookkeeper.bookie.EntryLogger.EntryLogScanner; import org.apache.bookkeeper.bookie.GarbageCollector.GarbageCleaner; import org.apache.bookkeeper.conf.ServerConfiguration; import org.apache.bookkeeper.meta.LedgerManager; import org.apache.bookkeeper.util.MathUtils; import org.apache.bookkeeper.util.SnapshotMap; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * This is the garbage collector thread that runs in the background to * remove any entry log files that no longer contains any active ledger. */ public class GarbageCollectorThread extends Thread { private static final Logger LOG = LoggerFactory.getLogger(GarbageCollectorThread.class); private static final int SECOND = 1000; // Maps entry log files to the set of ledgers that comprise the file and the size usage per ledger private Map entryLogMetaMap = new ConcurrentHashMap(); // This is how often we want to run the Garbage Collector Thread (in milliseconds). 
final long gcWaitTime; // Compaction parameters boolean enableMinorCompaction = false; final double minorCompactionThreshold; final long minorCompactionInterval; boolean enableMajorCompaction = false; final double majorCompactionThreshold; final long majorCompactionInterval; long lastMinorCompactionTime; long lastMajorCompactionTime; final int maxOutstandingRequests; final int compactionRate; final CompactionScannerFactory scannerFactory; // Entry Logger Handle final EntryLogger entryLogger; // Ledger Cache Handle final LedgerCache ledgerCache; final SnapshotMap activeLedgers; // flag to ensure gc thread will not be interrupted during compaction // to reduce the risk getting entry log corrupted final AtomicBoolean compacting = new AtomicBoolean(false); volatile boolean running = true; // track the last scanned successfully log id long scannedLogId = 0; final GarbageCollector garbageCollector; final GarbageCleaner garbageCleaner; private static class Offset { final long ledger; final long entry; final long offset; Offset(long ledger, long entry, long offset) { this.ledger = ledger; this.entry = entry; this.offset = offset; } } /** * A scanner wrapper to check whether a ledger is alive in an entry log file */ class CompactionScannerFactory implements EntryLogger.EntryLogListener { List offsets = new ArrayList(); EntryLogScanner newScanner(final EntryLogMetadata meta) { final RateLimiter rateLimiter = RateLimiter.create(compactionRate); return new EntryLogScanner() { @Override public boolean accept(long ledgerId) { return meta.containsLedger(ledgerId); } @Override public void process(final long ledgerId, long offset, ByteBuffer entry) throws IOException { rateLimiter.acquire(); synchronized (CompactionScannerFactory.this) { if (offsets.size() > maxOutstandingRequests) { waitEntrylogFlushed(); } entry.getLong(); // discard ledger id, we already have it long entryId = entry.getLong(); entry.rewind(); long newoffset = entryLogger.addEntry(ledgerId, entry); flushed.set(false); offsets.add(new Offset(ledgerId, entryId, newoffset)); } } }; } AtomicBoolean flushed = new AtomicBoolean(false); Object flushLock = new Object(); @Override public void onEntryLogFlushed() { synchronized (flushLock) { flushed.set(true); flushLock.notifyAll(); } } synchronized private void waitEntrylogFlushed() throws IOException { try { synchronized (flushLock) { while (!flushed.get() && entryLogger.isFlushRequired() && running) { flushLock.wait(1000); } if (!flushed.get() && entryLogger.isFlushRequired() && !running) { throw new IOException("Shutdown before flushed"); } } } catch (InterruptedException ie) { Thread.currentThread().interrupt(); throw new IOException("Interrupted waiting for flush", ie); } for (Offset o : offsets) { ledgerCache.putEntryOffset(o.ledger, o.entry, o.offset); } offsets.clear(); } synchronized void flush() throws IOException { waitEntrylogFlushed(); ledgerCache.flushLedger(true); } } /** * Create a garbage collector thread. * * @param conf * Server Configuration Object. 
* @throws IOException */ public GarbageCollectorThread(ServerConfiguration conf, final LedgerCache ledgerCache, EntryLogger entryLogger, SnapshotMap activeLedgers, LedgerManager ledgerManager) throws IOException { super("GarbageCollectorThread"); this.ledgerCache = ledgerCache; this.entryLogger = entryLogger; this.activeLedgers = activeLedgers; this.gcWaitTime = conf.getGcWaitTime(); this.maxOutstandingRequests = conf.getCompactionMaxOutstandingRequests(); this.compactionRate = conf.getCompactionRate(); this.scannerFactory = new CompactionScannerFactory(); entryLogger.addListener(this.scannerFactory); this.garbageCleaner = new GarbageCollector.GarbageCleaner() { @Override public void clean(long ledgerId) { try { if (LOG.isDebugEnabled()) { LOG.debug("delete ledger : " + ledgerId); } ledgerCache.deleteLedger(ledgerId); } catch (IOException e) { LOG.error("Exception when deleting the ledger index file on the Bookie: ", e); } } }; this.garbageCollector = new ScanAndCompareGarbageCollector(ledgerManager, activeLedgers); // compaction parameters minorCompactionThreshold = conf.getMinorCompactionThreshold(); minorCompactionInterval = conf.getMinorCompactionInterval() * SECOND; majorCompactionThreshold = conf.getMajorCompactionThreshold(); majorCompactionInterval = conf.getMajorCompactionInterval() * SECOND; if (minorCompactionInterval > 0 && minorCompactionThreshold > 0) { if (minorCompactionThreshold > 1.0f) { throw new IOException("Invalid minor compaction threshold " + minorCompactionThreshold); } if (minorCompactionInterval <= gcWaitTime) { throw new IOException("Too short minor compaction interval : " + minorCompactionInterval); } enableMinorCompaction = true; } if (majorCompactionInterval > 0 && majorCompactionThreshold > 0) { if (majorCompactionThreshold > 1.0f) { throw new IOException("Invalid major compaction threshold " + majorCompactionThreshold); } if (majorCompactionInterval <= gcWaitTime) { throw new IOException("Too short major compaction interval : " + majorCompactionInterval); } enableMajorCompaction = true; } if (enableMinorCompaction && enableMajorCompaction) { if (minorCompactionInterval >= majorCompactionInterval || minorCompactionThreshold >= majorCompactionThreshold) { throw new IOException("Invalid minor/major compaction settings : minor (" + minorCompactionThreshold + ", " + minorCompactionInterval + "), major (" + majorCompactionThreshold + ", " + majorCompactionInterval + ")"); } } LOG.info("Minor Compaction : enabled=" + enableMinorCompaction + ", threshold=" + minorCompactionThreshold + ", interval=" + minorCompactionInterval); LOG.info("Major Compaction : enabled=" + enableMajorCompaction + ", threshold=" + majorCompactionThreshold + ", interval=" + majorCompactionInterval); lastMinorCompactionTime = lastMajorCompactionTime = MathUtils.now(); } @Override public void run() { while (running) { synchronized (this) { try { wait(gcWaitTime); } catch (InterruptedException e) { Thread.currentThread().interrupt(); continue; } } // Extract all of the ledger ID's that comprise all of the entry logs // (except for the current new one which is still being written to). 
entryLogMetaMap = extractMetaFromEntryLogs(entryLogMetaMap); // gc inactive/deleted ledgers doGcLedgers(); // gc entry logs doGcEntryLogs(); long curTime = MathUtils.now(); if (enableMajorCompaction && curTime - lastMajorCompactionTime > majorCompactionInterval) { // enter major compaction LOG.info("Enter major compaction"); doCompactEntryLogs(majorCompactionThreshold); lastMajorCompactionTime = MathUtils.now(); // also move minor compaction time lastMinorCompactionTime = lastMajorCompactionTime; continue; } if (enableMinorCompaction && curTime - lastMinorCompactionTime > minorCompactionInterval) { // enter minor compaction LOG.info("Enter minor compaction"); doCompactEntryLogs(minorCompactionThreshold); lastMinorCompactionTime = MathUtils.now(); } } } /** * Do garbage collection ledger index files */ private void doGcLedgers() { garbageCollector.gc(garbageCleaner); } /** * Garbage collect those entry loggers which are not associated with any active ledgers */ private void doGcEntryLogs() { // Loop through all of the entry logs and remove the non-active ledgers. for (Long entryLogId : entryLogMetaMap.keySet()) { EntryLogMetadata meta = entryLogMetaMap.get(entryLogId); for (Long entryLogLedger : meta.ledgersMap.keySet()) { // Remove the entry log ledger from the set if it isn't active. if (!activeLedgers.containsKey(entryLogLedger)) { meta.removeLedger(entryLogLedger); } } if (meta.isEmpty()) { // This means the entry log is not associated with any active ledgers anymore. // We can remove this entry log file now. LOG.info("Deleting entryLogId " + entryLogId + " as it has no active ledgers!"); removeEntryLog(entryLogId); } } } /** * Compact entry logs if necessary. * *

 * Compaction visits entry log files in decreasing order of unused (dead)
 * space. Entry log files whose remaining (live) size percentage is at or
 * above the threshold are not compacted.

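 *
 * For illustration (not part of the original source), the usage compared
 * against the threshold is the live-data fraction tracked per entry log in
 * EntryLogMetadata:
 *
 *   double usage = (double) meta.remainingSize / meta.totalSize;
 *   if (usage < threshold) {
 *       // enough dead space for this log to be worth compacting
 *   }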
*/ private void doCompactEntryLogs(double threshold) { LOG.info("Do compaction to compact those files lower than " + threshold); // sort the ledger meta by occupied unused space Comparator sizeComparator = new Comparator() { @Override public int compare(EntryLogMetadata m1, EntryLogMetadata m2) { long unusedSize1 = m1.totalSize - m1.remainingSize; long unusedSize2 = m2.totalSize - m2.remainingSize; if (unusedSize1 > unusedSize2) { return -1; } else if (unusedSize1 < unusedSize2) { return 1; } else { return 0; } } }; List logsToCompact = new ArrayList(); logsToCompact.addAll(entryLogMetaMap.values()); Collections.sort(logsToCompact, sizeComparator); List toRemove = new ArrayList(); for (EntryLogMetadata meta : logsToCompact) { if (meta.getUsage() >= threshold) { break; } LOG.debug("Compacting entry log {} below threshold {}.", meta.entryLogId, threshold); try { compactEntryLog(scannerFactory, meta); toRemove.add(meta.entryLogId); } catch (LedgerDirsManager.NoWritableLedgerDirException nwlde) { LOG.warn("No writable ledger directory available, aborting compaction", nwlde); break; } catch (IOException ioe) { // if compact entry log throws IOException, we don't want to remove that // entry log. however, if some entries from that log have been readded // to the entry log, and the offset updated, it's ok to flush that LOG.error("Error compacting entry log. Log won't be deleted", ioe); } if (!running) { // if gc thread is not running, stop compaction return; } } try { // compaction finished, flush any outstanding offsets scannerFactory.flush(); } catch (IOException ioe) { LOG.error("Cannot flush compacted entries, skip removal", ioe); return; } // offsets have been flushed, its now safe to remove the old entrylogs for (Long l : toRemove) { removeEntryLog(l); } } /** * Shutdown the garbage collector thread. * * @throws InterruptedException if there is an exception stopping gc thread. */ public void shutdown() throws InterruptedException { this.running = false; if (compacting.compareAndSet(false, true)) { // if setting compacting flag succeed, means gcThread is not compacting now // it is safe to interrupt itself now this.interrupt(); } this.join(); } /** * Remove entry log. * * @param entryLogId * Entry Log File Id */ private void removeEntryLog(long entryLogId) { // remove entry log file successfully if (entryLogger.removeEntryLog(entryLogId)) { entryLogMetaMap.remove(entryLogId); } } /** * Compact an entry log. * * @param entryLogId * Entry Log File Id */ protected void compactEntryLog(CompactionScannerFactory scannerFactory, EntryLogMetadata entryLogMeta) throws IOException { // Similar with Sync Thread // try to mark compacting flag to make sure it would not be interrupted // by shutdown during compaction. otherwise it will receive // ClosedByInterruptException which may cause index file & entry logger // closed and corrupted. if (!compacting.compareAndSet(false, true)) { // set compacting flag failed, means compacting is true now // indicates another thread wants to interrupt gc thread to exit return; } LOG.info("Compacting entry log : {}", entryLogMeta.entryLogId); try { entryLogger.scanEntryLog(entryLogMeta.entryLogId, scannerFactory.newScanner(entryLogMeta)); } finally { // clear compacting flag compacting.set(false); } } /** * Records the total size, remaining size and the set of ledgers that comprise a entry log. 
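 *
 * A minimal usage sketch (illustrative, not part of the original source):
 *
 *   EntryLogMetadata meta = new EntryLogMetadata(logId);
 *   meta.addLedgerSize(1L, 100);    // ledger 1 wrote 100 bytes into this log
 *   meta.addLedgerSize(2L, 300);    // ledger 2 wrote 300 bytes
 *   meta.removeLedger(1L);          // ledger 1 deleted: remainingSize -= 100
 *   double usage = meta.getUsage(); // 300.0 / 400.0 = 0.75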
*/ static class EntryLogMetadata { long entryLogId; long totalSize; long remainingSize; ConcurrentHashMap ledgersMap; public EntryLogMetadata(long logId) { this.entryLogId = logId; totalSize = remainingSize = 0; ledgersMap = new ConcurrentHashMap(); } public void addLedgerSize(long ledgerId, long size) { totalSize += size; remainingSize += size; Long ledgerSize = ledgersMap.get(ledgerId); if (null == ledgerSize) { ledgerSize = 0L; } ledgerSize += size; ledgersMap.put(ledgerId, ledgerSize); } public void removeLedger(long ledgerId) { Long size = ledgersMap.remove(ledgerId); if (null == size) { return; } remainingSize -= size; } public boolean containsLedger(long ledgerId) { return ledgersMap.containsKey(ledgerId); } public double getUsage() { if (totalSize == 0L) { return 0.0f; } return (double)remainingSize / totalSize; } public boolean isEmpty() { return ledgersMap.isEmpty(); } @Override public String toString() { StringBuilder sb = new StringBuilder(); sb.append("{ totalSize = ").append(totalSize).append(", remainingSize = ") .append(remainingSize).append(", ledgersMap = ").append(ledgersMap).append(" }"); return sb.toString(); } } /** * A scanner used to extract entry log meta from entry log files. */ static class ExtractionScanner implements EntryLogScanner { EntryLogMetadata meta; public ExtractionScanner(EntryLogMetadata meta) { this.meta = meta; } @Override public boolean accept(long ledgerId) { return true; } @Override public void process(long ledgerId, long offset, ByteBuffer entry) { // add new entry size of a ledger to entry log meta meta.addLedgerSize(ledgerId, entry.limit() + 4); } } /** * Method to read in all of the entry logs (those that we haven't done so yet), * and find the set of ledger ID's that make up each entry log file. * * @param entryLogMetaMap * Existing EntryLogs to Meta * @throws IOException */ protected Map extractMetaFromEntryLogs(Map entryLogMetaMap) { // Extract it for every entry log except for the current one. // Entry Log ID's are just a long value that starts at 0 and increments // by 1 when the log fills up and we roll to a new one. long curLogId = entryLogger.getCurrentLogId(); boolean hasExceptionWhenScan = false; for (long entryLogId = scannedLogId; entryLogId < curLogId; entryLogId++) { // Comb the current entry log file if it has not already been extracted. if (entryLogMetaMap.containsKey(entryLogId)) { continue; } // check whether log file exists or not // if it doesn't exist, this log file might have been garbage collected. 
if (!entryLogger.logExists(entryLogId)) { continue; } LOG.info("Extracting entry log meta from entryLogId: {}", entryLogId); try { // Read through the entry log file and extract the entry log meta EntryLogMetadata entryLogMeta = extractMetaFromEntryLog(entryLogger, entryLogId); entryLogMetaMap.put(entryLogId, entryLogMeta); } catch (IOException e) { hasExceptionWhenScan = true; LOG.warn("Premature exception when processing " + entryLogId + " recovery will take care of the problem", e); } // if scan failed on some entry log, we don't move 'scannedLogId' to next id // if scan succeed, we don't need to scan it again during next gc run, // we move 'scannedLogId' to next id if (!hasExceptionWhenScan) { ++scannedLogId; } } return entryLogMetaMap; } static EntryLogMetadata extractMetaFromEntryLog(EntryLogger entryLogger, long entryLogId) throws IOException { EntryLogMetadata entryLogMeta = new EntryLogMetadata(entryLogId); ExtractionScanner scanner = new ExtractionScanner(entryLogMeta); // Read through the entry log file and extract the entry log meta entryLogger.scanEntryLog(entryLogId, scanner); LOG.debug("Retrieved entry log meta data entryLogId: {}, meta: {}", entryLogId, entryLogMeta); return entryLogMeta; } } HandleFactory.java000066400000000000000000000021601244507361200342730ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.bookie; import java.io.IOException; interface HandleFactory { LedgerDescriptor getHandle(long ledgerId, byte[] masterKey) throws IOException, BookieException; LedgerDescriptor getReadOnlyHandle(long ledgerId) throws IOException, Bookie.NoLedgerException; }HandleFactoryImpl.java000066400000000000000000000043641244507361200351250ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
* */ package org.apache.bookkeeper.bookie; import java.io.IOException; import java.util.HashMap; class HandleFactoryImpl implements HandleFactory { HashMap ledgers = new HashMap(); HashMap readOnlyLedgers = new HashMap(); final LedgerStorage ledgerStorage; HandleFactoryImpl(LedgerStorage ledgerStorage) { this.ledgerStorage = ledgerStorage; } @Override public LedgerDescriptor getHandle(long ledgerId, byte[] masterKey) throws IOException, BookieException { LedgerDescriptor handle = null; synchronized (ledgers) { handle = ledgers.get(ledgerId); if (handle == null) { handle = LedgerDescriptor.create(masterKey, ledgerId, ledgerStorage); ledgers.put(ledgerId, handle); } handle.checkAccess(masterKey); } return handle; } @Override public LedgerDescriptor getReadOnlyHandle(long ledgerId) throws IOException, Bookie.NoLedgerException { LedgerDescriptor handle = null; synchronized (ledgers) { handle = readOnlyLedgers.get(ledgerId); if (handle == null) { handle = LedgerDescriptor.createReadOnly(ledgerId, ledgerStorage); readOnlyLedgers.put(ledgerId, handle); } } return handle; } }InterleavedLedgerStorage.java000066400000000000000000000131371244507361200364700ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.bookie; import java.nio.ByteBuffer; import java.io.IOException; import org.apache.bookkeeper.jmx.BKMBeanInfo; import org.apache.bookkeeper.conf.ServerConfiguration; import org.apache.bookkeeper.meta.LedgerManager; import org.apache.bookkeeper.proto.BookieProtocol; import org.apache.bookkeeper.util.SnapshotMap; import org.apache.zookeeper.ZooKeeper; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Interleave ledger storage * This ledger storage implementation stores all entries in a single * file and maintains an index file for each ledger. */ class InterleavedLedgerStorage implements LedgerStorage { final static Logger LOG = LoggerFactory.getLogger(InterleavedLedgerStorage.class); EntryLogger entryLogger; LedgerCache ledgerCache; // A sorted map to stored all active ledger ids protected final SnapshotMap activeLedgers; // This is the thread that garbage collects the entry logs that do not // contain any active ledgers in them; and compacts the entry logs that // has lower remaining percentage to reclaim disk space. 
final GarbageCollectorThread gcThread; InterleavedLedgerStorage(ServerConfiguration conf, LedgerManager ledgerManager, LedgerDirsManager ledgerDirsManager) throws IOException { activeLedgers = new SnapshotMap(); entryLogger = new EntryLogger(conf, ledgerDirsManager); ledgerCache = new LedgerCacheImpl(conf, activeLedgers, ledgerDirsManager); gcThread = new GarbageCollectorThread(conf, ledgerCache, entryLogger, activeLedgers, ledgerManager); } @Override public void start() { gcThread.start(); } @Override public void shutdown() throws InterruptedException { // shut down gc thread, which depends on zookeeper client // also compaction will write entries again to entry log file gcThread.shutdown(); entryLogger.shutdown(); try { ledgerCache.close(); } catch (IOException e) { LOG.error("Error while closing the ledger cache", e); } } @Override public boolean setFenced(long ledgerId) throws IOException { return ledgerCache.setFenced(ledgerId); } @Override public boolean isFenced(long ledgerId) throws IOException { return ledgerCache.isFenced(ledgerId); } @Override public void setMasterKey(long ledgerId, byte[] masterKey) throws IOException { ledgerCache.setMasterKey(ledgerId, masterKey); } @Override public byte[] readMasterKey(long ledgerId) throws IOException, BookieException { return ledgerCache.readMasterKey(ledgerId); } @Override public boolean ledgerExists(long ledgerId) throws IOException { return ledgerCache.ledgerExists(ledgerId); } @Override synchronized public long addEntry(ByteBuffer entry) throws IOException { long ledgerId = entry.getLong(); long entryId = entry.getLong(); entry.rewind(); /* * Log the entry */ long pos = entryLogger.addEntry(ledgerId, entry); /* * Set offset of entry id to be the current ledger position */ ledgerCache.putEntryOffset(ledgerId, entryId, pos); return entryId; } @Override public ByteBuffer getEntry(long ledgerId, long entryId) throws IOException { long offset; /* * If entryId is BookieProtocol.LAST_ADD_CONFIRMED, then return the last written. */ if (entryId == BookieProtocol.LAST_ADD_CONFIRMED) { entryId = ledgerCache.getLastEntry(ledgerId); } offset = ledgerCache.getEntryOffset(ledgerId, entryId); if (offset == 0) { throw new Bookie.NoEntryException(ledgerId, entryId); } return ByteBuffer.wrap(entryLogger.readEntry(ledgerId, entryId, offset)); } @Override public boolean isFlushRequired() { return entryLogger.isFlushRequired(); } @Override public void flush() throws IOException { if (!isFlushRequired()) { return; } boolean flushFailed = false; try { ledgerCache.flushLedger(true); } catch (IOException ioe) { LOG.error("Exception flushing Ledger cache", ioe); flushFailed = true; } try { entryLogger.flush(); } catch (IOException ioe) { LOG.error("Exception flushing Ledger", ioe); flushFailed = true; } if (flushFailed) { throw new IOException("Flushing to storage failed, check logs"); } } @Override public BKMBeanInfo getJMXBean() { return ledgerCache.getJMXBean(); } } bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/Journal.java000066400000000000000000000500121244507361200332400ustar00rootroot00000000000000/** * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. 
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.bookie; import java.io.File; import java.io.FileInputStream; import java.io.FileOutputStream; import java.io.IOException; import java.nio.ByteBuffer; import java.util.ArrayList; import java.util.Collections; import java.util.LinkedList; import java.util.List; import java.util.concurrent.LinkedBlockingQueue; import org.apache.bookkeeper.bookie.LedgerDirsManager.NoWritableLedgerDirException; import org.apache.bookkeeper.conf.ServerConfiguration; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.WriteCallback; import org.apache.bookkeeper.util.IOUtils; import org.apache.bookkeeper.util.MathUtils; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Provide journal related management. */ class Journal extends Thread { static Logger LOG = LoggerFactory.getLogger(Journal.class); /** * Filter to pickup journals */ private static interface JournalIdFilter { public boolean accept(long journalId); } /** * List all journal ids by a specified journal id filer * * @param journalDir journal dir * @param filter journal id filter * @return list of filtered ids */ private static List listJournalIds(File journalDir, JournalIdFilter filter) { File logFiles[] = journalDir.listFiles(); List logs = new ArrayList(); for(File f: logFiles) { String name = f.getName(); if (!name.endsWith(".txn")) { continue; } String idString = name.split("\\.")[0]; long id = Long.parseLong(idString, 16); if (filter != null) { if (filter.accept(id)) { logs.add(id); } } else { logs.add(id); } } Collections.sort(logs); return logs; } /** * Last Log Mark */ class LastLogMark { private long txnLogId; private long txnLogPosition; private LastLogMark lastMark; LastLogMark(long logId, long logPosition) { this.txnLogId = logId; this.txnLogPosition = logPosition; } synchronized void setLastLogMark(long logId, long logPosition) { txnLogId = logId; txnLogPosition = logPosition; } synchronized void markLog() { lastMark = new LastLogMark(txnLogId, txnLogPosition); } synchronized LastLogMark getLastMark() { return lastMark; } synchronized long getTxnLogId() { return txnLogId; } synchronized long getTxnLogPosition() { return txnLogPosition; } synchronized void rollLog() throws NoWritableLedgerDirException { byte buff[] = new byte[16]; ByteBuffer bb = ByteBuffer.wrap(buff); // we should record marked in markLog // which is safe since records before lastMark have been // persisted to disk (both index & entry logger) bb.putLong(lastMark.getTxnLogId()); bb.putLong(lastMark.getTxnLogPosition()); LOG.debug("RollLog to persist last marked log : {}", lastMark); List writableLedgerDirs = ledgerDirsManager .getWritableLedgerDirs(); for (File dir : writableLedgerDirs) { File file = new File(dir, "lastMark"); FileOutputStream fos = null; try { fos = new FileOutputStream(file); fos.write(buff); fos.getChannel().force(true); fos.close(); fos = null; } catch (IOException e) { LOG.error("Problems writing to " + file, e); } finally { // if stream already closed in try block successfully, // stream might have nullified, in such case below // call will simply returns IOUtils.close(LOG, fos); } } } /** * Read 
last mark from lastMark file. * The last mark should first be max journal log id, * and then max log position in max journal log. */ synchronized void readLog() { byte buff[] = new byte[16]; ByteBuffer bb = ByteBuffer.wrap(buff); for(File dir: ledgerDirsManager.getAllLedgerDirs()) { File file = new File(dir, "lastMark"); try { FileInputStream fis = new FileInputStream(file); try { int bytesRead = fis.read(buff); if (bytesRead != 16) { throw new IOException("Couldn't read enough bytes from lastMark." + " Wanted " + 16 + ", got " + bytesRead); } } finally { fis.close(); } bb.clear(); long i = bb.getLong(); long p = bb.getLong(); if (i > txnLogId) { txnLogId = i; if(p > txnLogPosition) { txnLogPosition = p; } } } catch (IOException e) { LOG.error("Problems reading from " + file + " (this is okay if it is the first time starting this bookie"); } } } @Override public String toString() { StringBuilder sb = new StringBuilder(); sb.append("LastMark: logId - ").append(txnLogId) .append(" , position - ").append(txnLogPosition); return sb.toString(); } } /** * Filter to return list of journals for rolling */ private class JournalRollingFilter implements JournalIdFilter { @Override public boolean accept(long journalId) { if (journalId < lastLogMark.getLastMark().getTxnLogId()) { return true; } else { return false; } } } /** * Scanner used to scan a journal */ public static interface JournalScanner { /** * Process a journal entry. * * @param journalVersion * Journal Version * @param offset * File offset of the journal entry * @param entry * Journal Entry * @throws IOException */ public void process(int journalVersion, long offset, ByteBuffer entry) throws IOException; } /** * Journal Entry to Record */ private static class QueueEntry { QueueEntry(ByteBuffer entry, long ledgerId, long entryId, WriteCallback cb, Object ctx) { this.entry = entry.duplicate(); this.cb = cb; this.ctx = ctx; this.ledgerId = ledgerId; this.entryId = entryId; } ByteBuffer entry; long ledgerId; long entryId; WriteCallback cb; Object ctx; } final static long MB = 1024 * 1024L; // max journal file size final long maxJournalSize; // number journal files kept before marked journal final int maxBackupJournals; final File journalDirectory; final ServerConfiguration conf; private LastLogMark lastLogMark = new LastLogMark(0, 0); // journal entry queue to commit LinkedBlockingQueue queue = new LinkedBlockingQueue(); volatile boolean running = true; private LedgerDirsManager ledgerDirsManager; public Journal(ServerConfiguration conf, LedgerDirsManager ledgerDirsManager) { super("BookieJournal-" + conf.getBookiePort()); this.ledgerDirsManager = ledgerDirsManager; this.conf = conf; this.journalDirectory = Bookie.getCurrentDirectory(conf.getJournalDir()); this.maxJournalSize = conf.getMaxJournalSize() * MB; this.maxBackupJournals = conf.getMaxBackupJournals(); // read last log mark lastLogMark.readLog(); LOG.debug("Last Log Mark : {}", lastLogMark); } LastLogMark getLastLogMark() { return lastLogMark; } /** * Records a LastLogMark in memory. * *

* The LastLogMark contains two parts: the first is txnLogId * (the file id of a journal) and the second is txnLogPosition (an offset in * that journal). The LastLogMark indicates that entries before * it have been persisted to both the index and the entry log files. *

* *

* This method is called before flushing entry log files and ledger cache. *
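* <p>
* A minimal sketch of the checkpoint ordering this method participates in
* (the real call sites live in the bookie's SyncThread; the sequence below
* is illustrative rather than a verbatim copy of that code):
* <pre>
* journal.markLog();     // snapshot (txnLogId, txnLogPosition) in memory
* ledgerStorage.flush(); // persist entry log data and index pages
* journal.rollLog();     // write the snapshot to the "lastMark" file
* journal.gcJournals();  // older journal files are now reclaimable
* </pre>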

*/ public void markLog() { lastLogMark.markLog(); } /** * Persists the LastLogMark marked by {@link #markLog()} to disk. * *

* Persisting the mark means that every entry added before LastLogMark has had * both its entry data and its index pages written to disk, so it is then safe * to remove journal files created earlier than LastLogMark.txnLogId. *
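* <p>
* The removal itself happens in gcJournals() below: among the journal
* files whose id is below the marked txnLogId, the newest
* maxBackupJournals files are kept as spares and anything older is
* deleted.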

*

* If the bookie crashes before persisting LastLogMark to disk, * it still has journal files containing entries whose index pages may not * have been persisted. Consequently, when the bookie restarts, it replays * those journal files to restore the entries; no data is lost. *
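* <p>
* That restore path is replay(scanner) below: it lists every journal file
* with an id at or above the persisted mark and re-scans it, starting from
* the marked offset within the marked journal and from the beginning of
* any later journal.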

*

* This method is called after the entry log files and ledger cache have been * flushed successfully, which ensures the LastLogMark it persists is valid. *
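* <p>
* The on-disk "lastMark" record written here is deliberately tiny: two
* big-endian longs (16 bytes), mirroring what readLog() expects on
* restart. A sketch of the encoding used by rollLog():
* <pre>
* ByteBuffer bb = ByteBuffer.wrap(new byte[16]);
* bb.putLong(lastMark.getTxnLogId());       // journal file id
* bb.putLong(lastMark.getTxnLogPosition()); // offset within that journal
* </pre>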

* @see #markLog() */ public void rollLog() throws NoWritableLedgerDirException { lastLogMark.rollLog(); } /** * Garbage collect older journals */ public void gcJournals() { // list the journals that have been marked List logs = listJournalIds(journalDirectory, new JournalRollingFilter()); // keep MAX_BACKUP_JOURNALS journal files before marked journal if (logs.size() >= maxBackupJournals) { int maxIdx = logs.size() - maxBackupJournals; for (int i=0; i logs = listJournalIds(journalDirectory, new JournalIdFilter() { @Override public boolean accept(long journalId) { if (journalId < markedLogId) { return false; } return true; } }); // last log mark may be missed due to no sync up before // validate filtered log ids only when we have markedLogId if (markedLogId > 0) { if (logs.size() == 0 || logs.get(0) != markedLogId) { throw new IOException("Recovery log " + markedLogId + " is missing"); } } LOG.debug("Try to relay journal logs : {}", logs); // TODO: When reading in the journal logs that need to be synced, we // should use BufferedChannels instead to minimize the amount of // system calls done. for(Long id: logs) { long logPosition = 0L; if(id == markedLogId) { logPosition = lastLogMark.getTxnLogPosition(); } LOG.info("Replaying journal {} from position {}", id, logPosition); scanJournal(id, logPosition, scanner); } } /** * record an add entry operation in journal */ public void logAddEntry(ByteBuffer entry, WriteCallback cb, Object ctx) { long ledgerId = entry.getLong(); long entryId = entry.getLong(); entry.rewind(); queue.add(new QueueEntry(entry, ledgerId, entryId, cb, ctx)); } /** * Get the length of journal entries queue. * * @return length of journal entry queue. */ public int getJournalQueueLength() { return queue.size(); } /** * A thread used for persisting journal entries to journal files. * *
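* <p>
* The loop in run() is effectively a group commit: entries are drained
* from the queue into an in-memory batch, and the batch is only forced to
* disk once the queue momentarily empties or roughly 512KB has been
* buffered since the last flush. At that point every pending write
* callback completes at once and the in-memory last-log mark is advanced.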

* Besides persisting journal entries, it is also responsible for rolling * journal files when the current journal file reaches the configured size * limit. *
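* <p>
* "Reaching the limit" concretely means the buffered channel's position
* passing maxJournalSize (ServerConfiguration#getMaxJournalSize, in MB);
* the check happens at flush boundaries, so a file may slightly overshoot
* the configured size before it is rolled.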

*

* During journal rolling, it first closes the journal being written, generates * a new journal file id from the current timestamp, and then continues the * persistence logic. The rolled-over journals are later garbage collected * by the SyncThread. *
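* <p>
* New journal ids are seeded from the wall clock and then incremented per
* roll, and files are named by the hex form of the id, so a roll produces
* something like:
* <pre>
* long logId = System.currentTimeMillis(); // seed when no journals exist
* File fn = new File(journalDirectory, Long.toHexString(logId) + ".txn");
* </pre>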

* @see Bookie#SyncThread */ @Override public void run() { LinkedList toFlush = new LinkedList(); ByteBuffer lenBuff = ByteBuffer.allocate(4); JournalChannel logFile = null; try { List journalIds = listJournalIds(journalDirectory, null); // Should not use MathUtils.now(), which use System.nanoTime() and // could only be used to measure elapsed time. // http://docs.oracle.com/javase/1.5.0/docs/api/java/lang/System.html#nanoTime%28%29 long logId = journalIds.isEmpty() ? System.currentTimeMillis() : journalIds.get(journalIds.size() - 1); BufferedChannel bc = null; long lastFlushPosition = 0; QueueEntry qe = null; while (true) { // new journal file to write if (null == logFile) { logId = logId + 1; logFile = new JournalChannel(journalDirectory, logId); bc = logFile.getBufferedChannel(); lastFlushPosition = 0; } if (qe == null) { if (toFlush.isEmpty()) { qe = queue.take(); } else { qe = queue.poll(); if (qe == null || bc.position() > lastFlushPosition + 512*1024) { //logFile.force(false); bc.flush(true); lastFlushPosition = bc.position(); lastLogMark.setLastLogMark(logId, lastFlushPosition); for (QueueEntry e : toFlush) { e.cb.writeComplete(BookieException.Code.OK, e.ledgerId, e.entryId, null, e.ctx); } toFlush.clear(); // check whether journal file is over file limit if (bc.position() > maxJournalSize) { logFile.close(); logFile = null; continue; } } } } if (!running) { LOG.info("Journal Manager is asked to shut down, quit."); break; } if (qe == null) { // no more queue entry continue; } lenBuff.clear(); lenBuff.putInt(qe.entry.remaining()); lenBuff.flip(); // // we should be doing the following, but then we run out of // direct byte buffers // logFile.write(new ByteBuffer[] { lenBuff, qe.entry }); bc.write(lenBuff); bc.write(qe.entry); logFile.preAllocIfNeeded(); toFlush.add(qe); qe = null; } logFile.close(); logFile = null; } catch (IOException ioe) { LOG.error("I/O exception in Journal thread!", ioe); } catch (InterruptedException ie) { LOG.warn("Journal exits when shutting down", ie); } finally { IOUtils.close(LOG, logFile); } } /** * Shuts down the journal. */ public synchronized void shutdown() { try { if (!running) { return; } running = false; this.interrupt(); this.join(); } catch (InterruptedException ie) { LOG.warn("Interrupted during shutting down journal : ", ie); } } private static int fullRead(JournalChannel fc, ByteBuffer bb) throws IOException { int total = 0; while(bb.remaining() > 0) { int rc = fc.read(bb); if (rc <= 0) { return total; } total += rc; } return total; } } JournalChannel.java000066400000000000000000000125441244507361200344620ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
* */ package org.apache.bookkeeper.bookie; import java.util.Arrays; import java.io.Closeable; import java.io.File; import java.io.RandomAccessFile; import java.io.IOException; import java.nio.channels.FileChannel; import java.nio.ByteBuffer; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Simple wrapper around FileChannel to add versioning * information to the file. */ class JournalChannel implements Closeable { static Logger LOG = LoggerFactory.getLogger(JournalChannel.class); final FileChannel fc; final BufferedChannel bc; final int formatVersion; long nextPrealloc = 0; final byte[] MAGIC_WORD = "BKLG".getBytes(); private final static int START_OF_FILE = -12345; int HEADER_SIZE = 8; // 4byte magic word, 4 byte version int MIN_COMPAT_JOURNAL_FORMAT_VERSION = 1; int CURRENT_JOURNAL_FORMAT_VERSION = 4; public final static long preAllocSize = 4*1024*1024; public final static ByteBuffer zeros = ByteBuffer.allocate(512); JournalChannel(File journalDirectory, long logId) throws IOException { this(journalDirectory, logId, START_OF_FILE); } JournalChannel(File journalDirectory, long logId, long position) throws IOException { File fn = new File(journalDirectory, Long.toHexString(logId) + ".txn"); LOG.info("Opening journal {}", fn); if (!fn.exists()) { // new file, write version if (!fn.createNewFile()) { LOG.error("Journal file {}, that shouldn't exist, already exists. " + " is there another bookie process running?", fn); throw new IOException("File " + fn + " suddenly appeared, is another bookie process running?"); } fc = new RandomAccessFile(fn, "rw").getChannel(); formatVersion = CURRENT_JOURNAL_FORMAT_VERSION; ByteBuffer bb = ByteBuffer.allocate(HEADER_SIZE); bb.put(MAGIC_WORD); bb.putInt(formatVersion); bb.flip(); fc.write(bb); fc.force(true); bc = new BufferedChannel(fc, 65536); nextPrealloc = preAllocSize; fc.write(zeros, nextPrealloc); } else { // open an existing file fc = new RandomAccessFile(fn, "r").getChannel(); bc = null; // readonly ByteBuffer bb = ByteBuffer.allocate(HEADER_SIZE); int c = fc.read(bb); bb.flip(); if (c == HEADER_SIZE) { byte[] first4 = new byte[4]; bb.get(first4); if (Arrays.equals(first4, MAGIC_WORD)) { formatVersion = bb.getInt(); } else { // pre magic word journal, reset to 0; formatVersion = 1; } } else { // no header, must be old version formatVersion = 1; } if (formatVersion < MIN_COMPAT_JOURNAL_FORMAT_VERSION || formatVersion > CURRENT_JOURNAL_FORMAT_VERSION) { String err = String.format("Invalid journal version, unable to read." 
+ " Expected between (%d) and (%d), got (%d)", MIN_COMPAT_JOURNAL_FORMAT_VERSION, CURRENT_JOURNAL_FORMAT_VERSION, formatVersion); LOG.error(err); throw new IOException(err); } try { if (position == START_OF_FILE) { if (formatVersion >= 2) { fc.position(HEADER_SIZE); } else { fc.position(0); } } else { fc.position(position); } } catch (IOException e) { LOG.error("Bookie journal file can seek to position :", e); } } } int getFormatVersion() { return formatVersion; } BufferedChannel getBufferedChannel() throws IOException { if (bc == null) { throw new IOException("Read only journal channel"); } return bc; } void preAllocIfNeeded() throws IOException { if (bc.position() > nextPrealloc) { nextPrealloc = ((fc.size() + HEADER_SIZE) / preAllocSize + 1) * preAllocSize; zeros.clear(); fc.write(zeros, nextPrealloc); } } int read(ByteBuffer dst) throws IOException { return fc.read(dst); } public void close() throws IOException { fc.close(); } } LedgerCache.java000066400000000000000000000034451244507361200337050ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.bookie; import java.io.Closeable; import java.io.IOException; /** * This class maps a ledger entry number into a location (entrylogid, offset) in * an entry log file. It does user level caching to more efficiently manage disk * head scheduling. */ interface LedgerCache extends Closeable { boolean setFenced(long ledgerId) throws IOException; boolean isFenced(long ledgerId) throws IOException; void setMasterKey(long ledgerId, byte[] masterKey) throws IOException; byte[] readMasterKey(long ledgerId) throws IOException, BookieException; boolean ledgerExists(long ledgerId) throws IOException; void putEntryOffset(long ledger, long entry, long offset) throws IOException; long getEntryOffset(long ledger, long entry) throws IOException; void flushLedger(boolean doAll) throws IOException; long getLastEntry(long ledgerId) throws IOException; void deleteLedger(long ledgerId) throws IOException; LedgerCacheBean getJMXBean(); } LedgerCacheBean.java000066400000000000000000000017471244507361200344760ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. 
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.bookkeeper.bookie; import org.apache.bookkeeper.jmx.BKMBeanInfo; /** * Ledger Cache Bean */ public interface LedgerCacheBean extends LedgerCacheMXBean, BKMBeanInfo { } LedgerCacheImpl.java000066400000000000000000001017531244507361200345300ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.bookie; import java.io.File; import java.io.IOException; import java.nio.ByteBuffer; import java.util.ArrayList; import java.util.Collections; import java.util.Comparator; import java.util.HashMap; import java.util.Iterator; import java.util.LinkedList; import java.util.List; import java.util.Map; import java.util.Map.Entry; import java.util.concurrent.atomic.AtomicBoolean; import com.google.common.annotations.VisibleForTesting; import org.apache.bookkeeper.util.SnapshotMap; import org.apache.bookkeeper.bookie.LedgerDirsManager.LedgerDirsListener; import org.apache.bookkeeper.bookie.LedgerDirsManager.NoWritableLedgerDirException; import org.apache.bookkeeper.conf.ServerConfiguration; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Implementation of LedgerCache interface. * This class serves two purposes. */ public class LedgerCacheImpl implements LedgerCache { private final static Logger LOG = LoggerFactory.getLogger(LedgerCacheImpl.class); private static final String IDX = ".idx"; static final String RLOC = ".rloc"; private LedgerDirsManager ledgerDirsManager; final private AtomicBoolean shouldRelocateIndexFile = new AtomicBoolean(false); public LedgerCacheImpl(ServerConfiguration conf, SnapshotMap activeLedgers, LedgerDirsManager ledgerDirsManager) throws IOException { this.ledgerDirsManager = ledgerDirsManager; this.openFileLimit = conf.getOpenFileLimit(); this.pageSize = conf.getPageSize(); this.entriesPerPage = pageSize / 8; if (conf.getPageLimit() <= 0) { // allocate half of the memory to the page cache this.pageLimit = (int)((Runtime.getRuntime().maxMemory() / 3) / this.pageSize); } else { this.pageLimit = conf.getPageLimit(); } LOG.info("maxMemory = " + Runtime.getRuntime().maxMemory()); LOG.info("openFileLimit is " + openFileLimit + ", pageSize is " + pageSize + ", pageLimit is " + pageLimit); this.activeLedgers = activeLedgers; // Retrieve all of the active ledgers. 
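// getActiveLedgers() walks each ledger directory two levels deep,
// following the layout produced by getLedgerName():
//   <dir>/<hex((ledgerId & 0xff00) >> 8)>/<hex(ledgerId & 0xff)>/<hex(ledgerId)>.idx
// and registers every index file it finds as an active ledger.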
getActiveLedgers(); ledgerDirsManager.addLedgerDirsListener(getLedgerDirsListener()); } /** * the list of potentially clean ledgers */ LinkedList cleanLedgers = new LinkedList(); /** * the list of potentially dirty ledgers */ LinkedList dirtyLedgers = new LinkedList(); HashMap fileInfoCache = new HashMap(); LinkedList openLedgers = new LinkedList(); // Manage all active ledgers in LedgerManager // so LedgerManager has knowledge to garbage collect inactive/deleted ledgers final SnapshotMap activeLedgers; final int openFileLimit; final int pageSize; final int pageLimit; final int entriesPerPage; /** * @return page size used in ledger cache */ public int getPageSize() { return pageSize; } /** * @return entries per page used in ledger cache */ public int getEntriesPerPage() { return entriesPerPage; } /** * @return page limitation in ledger cache */ public int getPageLimit() { return pageLimit; } // The number of pages that have actually been used private int pageCount = 0; HashMap> pages = new HashMap>(); /** * @return number of page used in ledger cache */ public int getNumUsedPages() { return pageCount; } private void putIntoTable(HashMap> table, LedgerEntryPage lep) { HashMap map = table.get(lep.getLedger()); if (map == null) { map = new HashMap(); table.put(lep.getLedger(), map); } map.put(lep.getFirstEntry(), lep); } private static LedgerEntryPage getFromTable(HashMap> table, Long ledger, Long firstEntry) { HashMap map = table.get(ledger); if (map != null) { return map.get(firstEntry); } return null; } synchronized protected LedgerEntryPage getLedgerEntryPage(Long ledger, Long firstEntry, boolean onlyDirty) { LedgerEntryPage lep = getFromTable(pages, ledger, firstEntry); if (lep == null) { return null; } lep.usePage(); if (onlyDirty && lep.isClean()) { return null; } else { return lep; } } /** * Grab ledger entry page whose first entry is pageEntry. * * If the page doesn't existed before, we allocate a memory page. * Otherwise, we grab a clean page and read it from disk. * * @param ledger * Ledger Id * @param pageEntry * Start entry of this entry page. */ private LedgerEntryPage grabLedgerEntryPage(long ledger, long pageEntry) throws IOException { LedgerEntryPage lep = grabCleanPage(ledger, pageEntry); try { // should update page before we put it into table // otherwise we would put an empty page in it updatePage(lep); synchronized(this) { putIntoTable(pages, lep); } } catch (IOException ie) { // if we grab a clean page, but failed to update the page // we are exhausting the count of ledger entry pages. // since this page will be never used, so we need to decrement // page count of ledger cache. 
lep.releasePage(); synchronized (this) { --pageCount; } throw ie; } return lep; } @Override public void putEntryOffset(long ledger, long entry, long offset) throws IOException { int offsetInPage = (int) (entry % entriesPerPage); // find the id of the first entry of the page that has the entry // we are looking for long pageEntry = entry-offsetInPage; LedgerEntryPage lep = getLedgerEntryPage(ledger, pageEntry, false); if (lep == null) { lep = grabLedgerEntryPage(ledger, pageEntry); } if (lep != null) { lep.setOffset(offset, offsetInPage*8); lep.releasePage(); return; } } @Override public long getEntryOffset(long ledger, long entry) throws IOException { int offsetInPage = (int) (entry%entriesPerPage); // find the id of the first entry of the page that has the entry // we are looking for long pageEntry = entry-offsetInPage; LedgerEntryPage lep = getLedgerEntryPage(ledger, pageEntry, false); try { if (lep == null) { lep = grabLedgerEntryPage(ledger, pageEntry); } return lep.getOffset(offsetInPage*8); } finally { if (lep != null) { lep.releasePage(); } } } @VisibleForTesting public static final String getLedgerName(long ledgerId) { int parent = (int) (ledgerId & 0xff); int grandParent = (int) ((ledgerId & 0xff00) >> 8); StringBuilder sb = new StringBuilder(); sb.append(Integer.toHexString(grandParent)); sb.append('/'); sb.append(Integer.toHexString(parent)); sb.append('/'); sb.append(Long.toHexString(ledgerId)); sb.append(IDX); return sb.toString(); } FileInfo getFileInfo(Long ledger, byte masterKey[]) throws IOException { synchronized(fileInfoCache) { FileInfo fi = fileInfoCache.get(ledger); if (fi == null) { File lf = findIndexFile(ledger); if (lf == null) { if (masterKey == null) { throw new Bookie.NoLedgerException(ledger); } lf = getNewLedgerIndexFile(ledger, null); // A new ledger index file has been created for this Bookie. // Add this new ledger to the set of active ledgers. LOG.debug("New ledger index file created for ledgerId: {}", ledger); activeLedgers.put(ledger, true); } evictFileInfoIfNecessary(); fi = new FileInfo(lf, masterKey); fileInfoCache.put(ledger, fi); openLedgers.add(ledger); } if (fi != null) { fi.use(); } return fi; } } /** * Get a new index file for ledger excluding directory excludedDir. * * @param ledger * Ledger id. * @param excludedDir * The ledger directory to exclude. * @return new index file object. * @throws NoWritableLedgerDirException if there is no writable dir available. */ private File getNewLedgerIndexFile(Long ledger, File excludedDir) throws NoWritableLedgerDirException { File dir = ledgerDirsManager.pickRandomWritableDir(excludedDir); String ledgerName = getLedgerName(ledger); return new File(dir, ledgerName); } private void updatePage(LedgerEntryPage lep) throws IOException { if (!lep.isClean()) { throw new IOException("Trying to update a dirty page"); } FileInfo fi = null; try { fi = getFileInfo(lep.getLedger(), null); long pos = lep.getFirstEntry()*8; if (pos >= fi.size()) { lep.zeroPage(); } else { lep.readPage(fi); } } finally { if (fi != null) { fi.release(); } } } private LedgerDirsListener getLedgerDirsListener() { return new LedgerDirsListener() { @Override public void diskFull(File disk) { // If the current entry log disk is full, then create new entry // log. shouldRelocateIndexFile.set(true); } @Override public void diskFailed(File disk) { // Nothing to handle here. Will be handled in Bookie } @Override public void allDisksFull() { // Nothing to handle here. 
Will be handled in Bookie } @Override public void fatalError() { // Nothing to handle here. Will be handled in Bookie } }; } @Override public void flushLedger(boolean doAll) throws IOException { synchronized(dirtyLedgers) { if (dirtyLedgers.isEmpty()) { synchronized(this) { for(Long l: pages.keySet()) { if (LOG.isTraceEnabled()) { LOG.trace("Adding {} to dirty pages", Long.toHexString(l)); } dirtyLedgers.add(l); } } } if (dirtyLedgers.isEmpty()) { return; } if (shouldRelocateIndexFile.get()) { // if some new dir detected as full, then move all corresponding // open index files to new location for (Long l : dirtyLedgers) { FileInfo fi = null; try { fi = getFileInfo(l, null); File currentDir = getLedgerDirForLedger(fi); if (ledgerDirsManager.isDirFull(currentDir)) { moveLedgerIndexFile(l, fi); } } finally { if (null != fi) { fi.release(); } } } shouldRelocateIndexFile.set(false); } while(!dirtyLedgers.isEmpty()) { Long l = dirtyLedgers.removeFirst(); flushLedger(l); if (!doAll) { break; } // Yield. if we are doing all the ledgers we don't want to block other flushes that // need to happen try { dirtyLedgers.wait(1); } catch (InterruptedException e) { // just pass it on Thread.currentThread().interrupt(); } } } } /** * Get the ledger directory that the ledger index belongs to. * * @param fi File info of a ledger * @return ledger directory that the ledger belongs to. */ private File getLedgerDirForLedger(FileInfo fi) { return fi.getLf().getParentFile().getParentFile().getParentFile(); } private void moveLedgerIndexFile(Long l, FileInfo fi) throws NoWritableLedgerDirException, IOException { File newLedgerIndexFile = getNewLedgerIndexFile(l, getLedgerDirForLedger(fi)); fi.moveToNewLocation(newLedgerIndexFile, fi.getSizeSinceLastwrite()); } /** * Flush a specified ledger * * @param l * Ledger Id * @throws IOException */ private void flushLedger(long l) throws IOException { FileInfo fi = null; try { fi = getFileInfo(l, null); flushLedger(l, fi); } catch (Bookie.NoLedgerException nle) { // ledger has been deleted } finally { if (null != fi) { fi.release(); } } } private void flushLedger(long l, FileInfo fi) throws IOException { LinkedList firstEntryList; synchronized(this) { HashMap pageMap = pages.get(l); if (pageMap == null || pageMap.isEmpty()) { fi.flushHeader(); return; } firstEntryList = new LinkedList(); for(Map.Entry entry: pageMap.entrySet()) { LedgerEntryPage lep = entry.getValue(); if (lep.isClean()) { LOG.trace("Page is clean {}", lep); continue; } firstEntryList.add(lep.getFirstEntry()); } } if (firstEntryList.size() == 0) { LOG.debug("Nothing to flush for ledger {}.", l); // nothing to do return; } // Now flush all the pages of a ledger List entries = new ArrayList(firstEntryList.size()); try { for(Long firstEntry: firstEntryList) { LedgerEntryPage lep = getLedgerEntryPage(l, firstEntry, true); if (lep != null) { entries.add(lep); } } Collections.sort(entries, new Comparator() { @Override public int compare(LedgerEntryPage o1, LedgerEntryPage o2) { return (int)(o1.getFirstEntry()-o2.getFirstEntry()); } }); ArrayList versions = new ArrayList(entries.size()); // flush the header if necessary fi.flushHeader(); int start = 0; long lastOffset = -1; for(int i = 0; i < entries.size(); i++) { versions.add(i, entries.get(i).getVersion()); if (lastOffset != -1 && (entries.get(i).getFirstEntry() - lastOffset) != entriesPerPage) { // send up a sequential list int count = i - start; if (count == 0) { LOG.warn("Count cannot possibly be zero!"); } writeBuffers(l, entries, fi, start, count); start = 
i; } lastOffset = entries.get(i).getFirstEntry(); } if (entries.size()-start == 0 && entries.size() != 0) { LOG.warn("Nothing to write, but there were entries!"); } writeBuffers(l, entries, fi, start, entries.size()-start); synchronized(this) { for(int i = 0; i < entries.size(); i++) { LedgerEntryPage lep = entries.get(i); lep.setClean(versions.get(i)); } } } finally { for(LedgerEntryPage lep: entries) { lep.releasePage(); } } } private void writeBuffers(Long ledger, List entries, FileInfo fi, int start, int count) throws IOException { if (LOG.isTraceEnabled()) { LOG.trace("Writing {} buffers of {}", count, Long.toHexString(ledger)); } if (count == 0) { return; } ByteBuffer buffs[] = new ByteBuffer[count]; for(int j = 0; j < count; j++) { buffs[j] = entries.get(start+j).getPageToWrite(); if (entries.get(start+j).getLedger() != ledger) { throw new IOException("Writing to " + ledger + " but page belongs to " + entries.get(start+j).getLedger()); } } long totalWritten = 0; while(buffs[buffs.length-1].remaining() > 0) { long rc = fi.write(buffs, entries.get(start+0).getFirstEntry()*8); if (rc <= 0) { throw new IOException("Short write to ledger " + ledger + " rc = " + rc); } totalWritten += rc; } if (totalWritten != (long)count * (long)pageSize) { throw new IOException("Short write to ledger " + ledger + " wrote " + totalWritten + " expected " + count * pageSize); } } private LedgerEntryPage grabCleanPage(long ledger, long entry) throws IOException { if (entry % entriesPerPage != 0) { throw new IllegalArgumentException(entry + " is not a multiple of " + entriesPerPage); } outerLoop: while(true) { synchronized(this) { if (pageCount < pageLimit) { // let's see if we can allocate something LedgerEntryPage lep = new LedgerEntryPage(pageSize, entriesPerPage); lep.setLedger(ledger); lep.setFirstEntry(entry); // note, this will not block since it is a new page lep.usePage(); pageCount++; return lep; } } synchronized(cleanLedgers) { if (cleanLedgers.isEmpty()) { flushLedger(false); synchronized(this) { for(Long l: pages.keySet()) { cleanLedgers.add(l); } } } synchronized(this) { // if ledgers deleted between checking pageCount and putting // ledgers into cleanLedgers list, the cleanLedgers list would be empty. // so give it a chance to go back to check pageCount again because // deleteLedger would decrement pageCount to return the number of pages // occupied by deleted ledgers. 
if (cleanLedgers.isEmpty()) { continue outerLoop; } Long cleanLedger = cleanLedgers.getFirst(); Map map = pages.get(cleanLedger); while (map == null || map.isEmpty()) { cleanLedgers.removeFirst(); if (cleanLedgers.isEmpty()) { continue outerLoop; } cleanLedger = cleanLedgers.getFirst(); map = pages.get(cleanLedger); } Iterator> it = map.entrySet().iterator(); LedgerEntryPage lep = it.next().getValue(); while((lep.inUse() || !lep.isClean())) { if (!it.hasNext()) { // no clean page found in this ledger cleanLedgers.removeFirst(); continue outerLoop; } lep = it.next().getValue(); } it.remove(); if (map.isEmpty()) { pages.remove(lep.getLedger()); } lep.usePage(); lep.zeroPage(); lep.setLedger(ledger); lep.setFirstEntry(entry); return lep; } } } } @Override public long getLastEntry(long ledgerId) throws IOException { long lastEntry = 0; // Find the last entry in the cache synchronized(this) { Map map = pages.get(ledgerId); if (map != null) { for(LedgerEntryPage lep: map.values()) { if (lep.getFirstEntry() + entriesPerPage < lastEntry) { continue; } lep.usePage(); long highest = lep.getLastEntry(); if (highest > lastEntry) { lastEntry = highest; } lep.releasePage(); } } } FileInfo fi = null; try { fi = getFileInfo(ledgerId, null); long size = fi.size(); // make sure the file size is aligned with index entry size // otherwise we may read incorret data if (0 != size % 8) { LOG.warn("Index file of ledger {} is not aligned with index entry size.", ledgerId); size = size - size % 8; } // we may not have the last entry in the cache if (size > lastEntry*8) { ByteBuffer bb = ByteBuffer.allocate(getPageSize()); long position = size - getPageSize(); if (position < 0) { position = 0; } fi.read(bb, position); bb.flip(); long startingEntryId = position/8; for(int i = getEntriesPerPage()-1; i >= 0; i--) { if (bb.getLong(i*8) != 0) { if (lastEntry < startingEntryId+i) { lastEntry = startingEntryId+i; } break; } } } } finally { if (fi != null) { fi.release(); } } return lastEntry; } /** * This method will look within the ledger directories for the ledger index * files. That will comprise the set of active ledgers this particular * BookieServer knows about that have not yet been deleted by the BookKeeper * Client. This is called only once during initialization. */ private void getActiveLedgers() throws IOException { // Ledger index files are stored in a file hierarchy with a parent and // grandParent directory. We'll have to go two levels deep into these // directories to find the index files. for (File ledgerDirectory : ledgerDirsManager.getAllLedgerDirs()) { for (File grandParent : ledgerDirectory.listFiles()) { if (grandParent.isDirectory()) { for (File parent : grandParent.listFiles()) { if (parent.isDirectory()) { for (File index : parent.listFiles()) { if (!index.isFile() || (!index.getName().endsWith(IDX) && !index.getName().endsWith(RLOC))) { continue; } // We've found a ledger index file. The file // name is the HexString representation of the // ledgerId. 
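// A ".rloc" suffix marks a half-finished index relocation (see
// moveLedgerIndexFile and FileInfo#moveToNewLocation): if a proper
// ".idx" file already exists elsewhere the relocation completed, so the
// stray ".rloc" is deleted; otherwise the ".rloc" holds the only copy
// of the index and is renamed into place.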
String ledgerIdInHex = index.getName().replace(RLOC, "").replace(IDX, ""); if (index.getName().endsWith(RLOC)) { if (findIndexFile(Long.parseLong(ledgerIdInHex)) != null) { if (!index.delete()) { LOG.warn("Deleting the rloc file " + index + " failed"); } continue; } else { File dest = new File(index.getParentFile(), ledgerIdInHex + IDX); if (!index.renameTo(dest)) { throw new IOException("Renaming rloc file " + index + " to index file has failed"); } } } activeLedgers.put(Long.parseLong(ledgerIdInHex, 16), true); } } } } } } } /** * This method is called whenever a ledger is deleted by the BookKeeper Client * and we want to remove all relevant data for it stored in the LedgerCache. */ @Override public void deleteLedger(long ledgerId) throws IOException { LOG.debug("Deleting ledgerId: {}", ledgerId); // remove pages first to avoid page flushed when deleting file info synchronized(this) { Map lpages = pages.remove(ledgerId); if (null != lpages) { pageCount -= lpages.size(); if (pageCount < 0) { LOG.error("Page count of ledger cache has been decremented to be less than zero."); } } } // Delete the ledger's index file and close the FileInfo FileInfo fi = null; try { fi = getFileInfo(ledgerId, null); fi.close(false); fi.delete(); } finally { // should release use count // otherwise the file channel would not be closed. if (null != fi) { fi.release(); } } // Remove it from the active ledger manager activeLedgers.remove(ledgerId); // Now remove it from all the other lists and maps. // These data structures need to be synchronized first before removing entries. synchronized(fileInfoCache) { fileInfoCache.remove(ledgerId); } synchronized(cleanLedgers) { cleanLedgers.remove(ledgerId); } synchronized(dirtyLedgers) { dirtyLedgers.remove(ledgerId); } synchronized(openLedgers) { openLedgers.remove(ledgerId); } } private File findIndexFile(long ledgerId) throws IOException { String ledgerName = getLedgerName(ledgerId); for (File d : ledgerDirsManager.getAllLedgerDirs()) { File lf = new File(d, ledgerName); if (lf.exists()) { return lf; } } return null; } @Override public byte[] readMasterKey(long ledgerId) throws IOException, BookieException { synchronized(fileInfoCache) { FileInfo fi = fileInfoCache.get(ledgerId); if (fi == null) { File lf = findIndexFile(ledgerId); if (lf == null) { throw new Bookie.NoLedgerException(ledgerId); } evictFileInfoIfNecessary(); fi = new FileInfo(lf, null); byte[] key = fi.getMasterKey(); fileInfoCache.put(ledgerId, fi); openLedgers.add(ledgerId); return key; } return fi.getMasterKey(); } } // evict file info if necessary private void evictFileInfoIfNecessary() throws IOException { synchronized (fileInfoCache) { if (openLedgers.size() > openFileLimit) { long ledgerToRemove = openLedgers.removeFirst(); // TODO Add a statistic here, we don't care really which // ledger is evicted, but the rate at which they get evicted LOG.debug("Ledger {} is evicted from file info cache.", ledgerToRemove); FileInfo fi = fileInfoCache.remove(ledgerToRemove); if (fi != null) { fi.close(true); } } } } @Override public boolean setFenced(long ledgerId) throws IOException { FileInfo fi = null; try { fi = getFileInfo(ledgerId, null); if (null != fi) { return fi.setFenced(); } return false; } finally { if (null != fi) { fi.release(); } } } @Override public boolean isFenced(long ledgerId) throws IOException { FileInfo fi = null; try { fi = getFileInfo(ledgerId, null); if (null != fi) { return fi.isFenced(); } return false; } finally { if (null != fi) { fi.release(); } } } @Override public void 
setMasterKey(long ledgerId, byte[] masterKey) throws IOException { FileInfo fi = null; try { fi = getFileInfo(ledgerId, masterKey); } finally { if (null != fi) { fi.release(); } } } @Override public boolean ledgerExists(long ledgerId) throws IOException { synchronized(fileInfoCache) { FileInfo fi = fileInfoCache.get(ledgerId); if (fi == null) { File lf = findIndexFile(ledgerId); if (lf == null) { return false; } } } return true; } @Override public LedgerCacheBean getJMXBean() { return new LedgerCacheBean() { @Override public String getName() { return "LedgerCache"; } @Override public boolean isHidden() { return false; } @Override public int getPageCount() { return LedgerCacheImpl.this.getNumUsedPages(); } @Override public int getPageSize() { return LedgerCacheImpl.this.getPageSize(); } @Override public int getOpenFileLimit() { return openFileLimit; } @Override public int getPageLimit() { return LedgerCacheImpl.this.getPageLimit(); } @Override public int getNumCleanLedgers() { return cleanLedgers.size(); } @Override public int getNumDirtyLedgers() { return dirtyLedgers.size(); } @Override public int getNumOpenLedgers() { return openLedgers.size(); } }; } @Override public void close() throws IOException { synchronized (fileInfoCache) { for (Entry fileInfo : fileInfoCache.entrySet()) { FileInfo value = fileInfo.getValue(); if (value != null) { value.close(true); } } fileInfoCache.clear(); } } } LedgerCacheMXBean.java000066400000000000000000000030011244507361200347240ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.bookkeeper.bookie; /** * Ledger Cache MBean */ public interface LedgerCacheMXBean { /** * @return number of page used in cache */ public int getPageCount(); /** * @return page size */ public int getPageSize(); /** * @return the limit of open files */ public int getOpenFileLimit(); /** * @return the limit number of pages */ public int getPageLimit(); /** * @return number of clean ledgers */ public int getNumCleanLedgers(); /** * @return number of dirty ledgers */ public int getNumDirtyLedgers(); /** * @return number of open ledgers */ public int getNumOpenLedgers(); } LedgerDescriptor.java000066400000000000000000000044061244507361200350160ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. 
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.bookie; import java.io.IOException; import java.nio.ByteBuffer; import java.util.Arrays; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Implements a ledger inside a bookie. In particular, it implements operations * to write entries to a ledger and read entries from a ledger. */ public abstract class LedgerDescriptor { static LedgerDescriptor create(byte[] masterKey, long ledgerId, LedgerStorage ledgerStorage) throws IOException { LedgerDescriptor ledger = new LedgerDescriptorImpl(masterKey, ledgerId, ledgerStorage); ledgerStorage.setMasterKey(ledgerId, masterKey); return ledger; } static LedgerDescriptor createReadOnly(long ledgerId, LedgerStorage ledgerStorage) throws IOException, Bookie.NoLedgerException { if (!ledgerStorage.ledgerExists(ledgerId)) { throw new Bookie.NoLedgerException(ledgerId); } return new LedgerDescriptorReadOnlyImpl(ledgerId, ledgerStorage); } abstract void checkAccess(byte masterKey[]) throws BookieException, IOException; abstract long getLedgerId(); abstract boolean setFenced() throws IOException; abstract boolean isFenced() throws IOException; abstract long addEntry(ByteBuffer entry) throws IOException; abstract ByteBuffer readEntry(long entryId) throws IOException; } LedgerDescriptorImpl.java000066400000000000000000000051231244507361200356350ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.bookie; import java.io.IOException; import java.nio.ByteBuffer; import java.util.Arrays; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Implements a ledger inside a bookie. In particular, it implements operations * to write entries to a ledger and read entries from a ledger. 
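* <p>
* Entries handed to addEntry are expected to begin with two longs, the
* ledger id followed by the entry id (the same layout the journal relies
* on in logAddEntry); the implementation below uses the leading ledger id
* to verify that an entry was routed to the right descriptor.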
* */ public class LedgerDescriptorImpl extends LedgerDescriptor { final static Logger LOG = LoggerFactory.getLogger(LedgerDescriptor.class); final LedgerStorage ledgerStorage; private long ledgerId; final byte[] masterKey; LedgerDescriptorImpl(byte[] masterKey, long ledgerId, LedgerStorage ledgerStorage) { this.masterKey = masterKey; this.ledgerId = ledgerId; this.ledgerStorage = ledgerStorage; } @Override void checkAccess(byte masterKey[]) throws BookieException, IOException { if (!Arrays.equals(this.masterKey, masterKey)) { throw BookieException.create(BookieException.Code.UnauthorizedAccessException); } } @Override public long getLedgerId() { return ledgerId; } @Override boolean setFenced() throws IOException { return ledgerStorage.setFenced(ledgerId); } @Override boolean isFenced() throws IOException { return ledgerStorage.isFenced(ledgerId); } @Override long addEntry(ByteBuffer entry) throws IOException { long ledgerId = entry.getLong(); if (ledgerId != this.ledgerId) { throw new IOException("Entry for ledger " + ledgerId + " was sent to " + this.ledgerId); } entry.rewind(); return ledgerStorage.addEntry(entry); } @Override ByteBuffer readEntry(long entryId) throws IOException { return ledgerStorage.getEntry(ledgerId, entryId); } } LedgerDescriptorReadOnlyImpl.java000066400000000000000000000033671244507361200373030ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.bookie; import java.io.IOException; import java.nio.ByteBuffer; /** * Implements a ledger inside a bookie. In particular, it implements operations * to write entries to a ledger and read entries from a ledger. */ public class LedgerDescriptorReadOnlyImpl extends LedgerDescriptorImpl { LedgerDescriptorReadOnlyImpl(long ledgerId, LedgerStorage storage) { super(null, ledgerId, storage); } @Override boolean setFenced() throws IOException { assert false; throw new IOException("Invalid action on read only descriptor"); } @Override long addEntry(ByteBuffer entry) throws IOException { assert false; throw new IOException("Invalid action on read only descriptor"); } @Override void checkAccess(byte masterKey[]) throws BookieException, IOException { assert false; throw new IOException("Invalid action on read only descriptor"); } } LedgerDirsManager.java000066400000000000000000000220601244507361200350700ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/** * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. 
The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.bookie; import java.io.File; import java.io.IOException; import java.util.ArrayList; import java.util.Arrays; import java.util.List; import java.util.Random; import com.google.common.annotations.VisibleForTesting; import org.apache.bookkeeper.conf.ServerConfiguration; import org.apache.bookkeeper.util.DiskChecker; import org.apache.bookkeeper.util.DiskChecker.DiskErrorException; import org.apache.bookkeeper.util.DiskChecker.DiskOutOfSpaceException; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * This class manages ledger directories used by the bookie. */ public class LedgerDirsManager { private static Logger LOG = LoggerFactory .getLogger(LedgerDirsManager.class); private volatile List filledDirs; private final List ledgerDirectories; private volatile List writableLedgerDirectories; private DiskChecker diskChecker; private List listeners; private LedgerDirsMonitor monitor; private final Random rand = new Random(); public LedgerDirsManager(ServerConfiguration conf) { this.ledgerDirectories = Arrays.asList(Bookie .getCurrentDirectories(conf.getLedgerDirs())); this.writableLedgerDirectories = new ArrayList(ledgerDirectories); this.filledDirs = new ArrayList(); listeners = new ArrayList(); diskChecker = new DiskChecker(conf.getDiskUsageThreshold()); monitor = new LedgerDirsMonitor(conf.getDiskCheckInterval()); } /** * Get all ledger dirs configured */ public List getAllLedgerDirs() { return ledgerDirectories; } /** * Get only writable ledger dirs. */ public List getWritableLedgerDirs() throws NoWritableLedgerDirException { if (writableLedgerDirectories.isEmpty()) { String errMsg = "All ledger directories are non writable"; NoWritableLedgerDirException e = new NoWritableLedgerDirException( errMsg); LOG.error(errMsg, e); throw e; } return writableLedgerDirectories; } /** * Get dirs, which are full more than threshold */ public boolean isDirFull(File dir) { return filledDirs.contains(dir); } /** * Add the dir to filled dirs list */ @VisibleForTesting public void addToFilledDirs(File dir) { if (!filledDirs.contains(dir)) { LOG.warn(dir + " is out of space." + " Adding it to filled dirs list"); // Update filled dirs list List updatedFilledDirs = new ArrayList(filledDirs); updatedFilledDirs.add(dir); filledDirs = updatedFilledDirs; // Update the writable ledgers list List newDirs = new ArrayList(writableLedgerDirectories); newDirs.removeAll(filledDirs); writableLedgerDirectories = newDirs; // Notify listeners about disk full for (LedgerDirsListener listener : listeners) { listener.diskFull(dir); } } } /** * Returns one of the ledger dir from writable dirs list randomly. */ File pickRandomWritableDir() throws NoWritableLedgerDirException { return pickRandomWritableDir(null); } /** * Pick up a writable dir from available dirs list randomly. The excludedDir * will not be pickedup. * * @param excludedDir * The directory to exclude during pickup. 
* @throws NoWritableLedgerDirException if there is no writable dir available. */ File pickRandomWritableDir(File excludedDir) throws NoWritableLedgerDirException { List writableDirs = getWritableLedgerDirs(); final int start = rand.nextInt(writableDirs.size()); int idx = start; File candidate = writableDirs.get(idx); while (null != excludedDir && excludedDir.equals(candidate)) { idx = (idx + 1) % writableDirs.size(); if (idx == start) { // after searching all available dirs, // no writable dir is found throw new NoWritableLedgerDirException("No writable directories found from " + " available writable dirs (" + writableDirs + ") : exclude dir " + excludedDir); } candidate = writableDirs.get(idx); } return candidate; } public void addLedgerDirsListener(LedgerDirsListener listener) { if (listener != null) { listeners.add(listener); } } // start the daemon for disk monitoring public void start() { monitor.setDaemon(true); monitor.start(); } // shutdown disk monitoring daemon public void shutdown() { monitor.interrupt(); try { monitor.join(); } catch (InterruptedException e) { // Ignore } } /** * Thread to monitor the disk space periodically. */ private class LedgerDirsMonitor extends Thread { private final int interval; public LedgerDirsMonitor(int interval) { super("LedgerDirsMonitorThread"); this.interval = interval; } @Override public void run() { try { while (true) { List writableDirs; try { writableDirs = getWritableLedgerDirs(); } catch (NoWritableLedgerDirException e) { for (LedgerDirsListener listener : listeners) { listener.allDisksFull(); } break; } // Check all writable dirs disk space usage. for (File dir : writableDirs) { try { diskChecker.checkDir(dir); } catch (DiskErrorException e) { // Notify disk failure to all listeners for (LedgerDirsListener listener : listeners) { listener.diskFailed(dir); } } catch (DiskOutOfSpaceException e) { // Notify disk full to all listeners addToFilledDirs(dir); } } try { Thread.sleep(interval); } catch (InterruptedException e) { LOG.info("LedgerDirsMonitor thread is interrupted"); break; } } } catch (Exception e) { LOG.error("Error Occured while checking disks", e); // Notify disk failure to all listeners for (LedgerDirsListener listener : listeners) { listener.fatalError(); } } LOG.info("LedgerDirsMonitorThread exited!"); } } /** * Indicates All configured ledger directories are full. */ public static class NoWritableLedgerDirException extends IOException { private static final long serialVersionUID = -8696901285061448421L; public NoWritableLedgerDirException(String errMsg) { super(errMsg); } } /** * Listener for the disk check events will be notified from the * {@link LedgerDirsManager} whenever disk full/failure detected. */ public static interface LedgerDirsListener { /** * This will be notified on disk failure/disk error * * @param disk * Failed disk */ void diskFailed(File disk); /** * This will be notified on disk detected as full * * @param disk * Filled disk */ void diskFull(File disk); /** * This will be notified whenever all disks are detected as full. */ void allDisksFull(); /** * This will notify the fatal errors. */ void fatalError(); } } LedgerEntryPage.java000066400000000000000000000115071244507361200345760ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. 
The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.bookie; import java.io.IOException; import java.nio.ByteBuffer; import org.apache.bookkeeper.proto.BookieProtocol; /** * This is a page in the LedgerCache. It holds the locations * (entrylogfile, offset) for entry ids. */ public class LedgerEntryPage { private final int pageSize; private final int entriesPerPage; private long ledger = -1; private long firstEntry = BookieProtocol.INVALID_ENTRY_ID; private final ByteBuffer page; private boolean clean = true; private boolean pinned = false; private int useCount; private int version; public LedgerEntryPage(int pageSize, int entriesPerPage) { this.pageSize = pageSize; this.entriesPerPage = entriesPerPage; page = ByteBuffer.allocateDirect(pageSize); } @Override public String toString() { StringBuilder sb = new StringBuilder(); sb.append(getLedger()); sb.append('@'); sb.append(getFirstEntry()); sb.append(clean ? " clean " : " dirty "); sb.append(useCount); return sb.toString(); } synchronized public void usePage() { useCount++; } synchronized public void pin() { pinned = true; } synchronized public void unpin() { pinned = false; } synchronized public boolean isPinned() { return pinned; } synchronized public void releasePage() { useCount--; if (useCount < 0) { throw new IllegalStateException("Use count has gone below 0"); } } synchronized private void checkPage() { if (useCount <= 0) { throw new IllegalStateException("Page not marked in use"); } } @Override public boolean equals(Object other) { if (other instanceof LedgerEntryPage) { LedgerEntryPage otherLEP = (LedgerEntryPage) other; return otherLEP.getLedger() == getLedger() && otherLEP.getFirstEntry() == getFirstEntry(); } else { return false; } } @Override public int hashCode() { return (int)getLedger() ^ (int)(getFirstEntry()); } void setClean(int versionOfCleaning) { this.clean = (versionOfCleaning == version); } boolean isClean() { return clean; } public void setOffset(long offset, int position) { checkPage(); version++; this.clean = false; page.putLong(position, offset); } public long getOffset(int position) { checkPage(); return page.getLong(position); } static final byte zeroPage[] = new byte[64*1024]; public void zeroPage() { checkPage(); page.clear(); page.put(zeroPage, 0, page.remaining()); clean = true; } public void readPage(FileInfo fi) throws IOException { checkPage(); page.clear(); while(page.remaining() != 0) { if (fi.read(page, getFirstEntry()*8) <= 0) { throw new IOException("Short page read of ledger " + getLedger() + " tried to get " + page.capacity() + " from position " + getFirstEntry()*8 + " still need " + page.remaining()); } } clean = true; } public ByteBuffer getPageToWrite() { checkPage(); page.clear(); return page; } void setLedger(long ledger) { this.ledger = ledger; } long getLedger() { return ledger; } int getVersion() { return version; } void setFirstEntry(long firstEntry) { if (firstEntry % entriesPerPage != 0) { throw new IllegalArgumentException(firstEntry + " is not a 
multiple of " + entriesPerPage); } this.firstEntry = firstEntry; } long getFirstEntry() { return firstEntry; } public boolean inUse() { return useCount > 0; } public long getLastEntry() { for(int i = entriesPerPage - 1; i >= 0; i--) { if (getOffset(i*8) > 0) { return i + firstEntry; } } return 0; } } LedgerStorage.java000066400000000000000000000057511244507361200343100ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.bookie; import java.io.IOException; import java.nio.ByteBuffer; import org.apache.bookkeeper.jmx.BKMBeanInfo; /** * Interface for storing ledger data * on persistant storage. */ interface LedgerStorage { /** * Start any background threads * belonging to the storage system. For example, * garbage collection. */ void start(); /** * Cleanup and free any resources * being used by the storage system. */ void shutdown() throws InterruptedException; /** * Whether a ledger exists */ boolean ledgerExists(long ledgerId) throws IOException; /** * Fenced the ledger id in ledger storage. * * @param ledgerId * Ledger Id. * @throws IOException when failed to fence the ledger. */ boolean setFenced(long ledgerId) throws IOException; /** * Check whether the ledger is fenced in ledger storage or not. * * @param ledgerId * Ledger ID. * @throws IOException */ boolean isFenced(long ledgerId) throws IOException; /** * Set the master key for a ledger */ void setMasterKey(long ledgerId, byte[] masterKey) throws IOException; /** * Get the master key for a ledger * @throws IOException if there is an error reading the from the ledger * @throws BookieException if no such ledger exists */ byte[] readMasterKey(long ledgerId) throws IOException, BookieException; /** * Add an entry to the storage. * @return the entry id of the entry added */ long addEntry(ByteBuffer entry) throws IOException; /** * Read an entry from storage */ ByteBuffer getEntry(long ledgerId, long entryId) throws IOException; /** * Whether there is data in the storage which needs to be flushed */ boolean isFlushRequired(); /** * Flushes all data in the storage. Once this is called, * add data written to the LedgerStorage up until this point * has been persisted to perminant storage */ void flush() throws IOException; /** * Get the JMX management bean for this LedgerStorage */ BKMBeanInfo getJMXBean(); } MarkerFileChannel.java000066400000000000000000000076021244507361200350700ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. 
See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.bookie; import java.io.IOException; import java.nio.ByteBuffer; import java.nio.MappedByteBuffer; import java.nio.channels.FileChannel; import java.nio.channels.FileLock; import java.nio.channels.ReadableByteChannel; import java.nio.channels.WritableByteChannel; /** * This class is just a stub that can be used in collections with * FileChannels */ public class MarkerFileChannel extends FileChannel { @Override public void force(boolean metaData) throws IOException { // TODO Auto-generated method stub } @Override public FileLock lock(long position, long size, boolean shared) throws IOException { // TODO Auto-generated method stub return null; } @Override public MappedByteBuffer map(MapMode mode, long position, long size) throws IOException { // TODO Auto-generated method stub return null; } @Override public long position() throws IOException { // TODO Auto-generated method stub return 0; } @Override public FileChannel position(long newPosition) throws IOException { // TODO Auto-generated method stub return null; } @Override public int read(ByteBuffer dst) throws IOException { // TODO Auto-generated method stub return 0; } @Override public int read(ByteBuffer dst, long position) throws IOException { // TODO Auto-generated method stub return 0; } @Override public long read(ByteBuffer[] dsts, int offset, int length) throws IOException { // TODO Auto-generated method stub return 0; } @Override public long size() throws IOException { // TODO Auto-generated method stub return 0; } @Override public long transferFrom(ReadableByteChannel src, long position, long count) throws IOException { // TODO Auto-generated method stub return 0; } @Override public long transferTo(long position, long count, WritableByteChannel target) throws IOException { // TODO Auto-generated method stub return 0; } @Override public FileChannel truncate(long size) throws IOException { // TODO Auto-generated method stub return null; } @Override public FileLock tryLock(long position, long size, boolean shared) throws IOException { // TODO Auto-generated method stub return null; } @Override public int write(ByteBuffer src) throws IOException { // TODO Auto-generated method stub return 0; } @Override public int write(ByteBuffer src, long position) throws IOException { // TODO Auto-generated method stub return 0; } @Override public long write(ByteBuffer[] srcs, int offset, int length) throws IOException { // TODO Auto-generated method stub return 0; } @Override protected void implCloseChannel() throws IOException { // TODO Auto-generated method stub } } ReadOnlyEntryLogger.java000066400000000000000000000034161244507361200354540ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license 
agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.bookie; import java.io.IOException; import java.nio.ByteBuffer; import org.apache.bookkeeper.conf.ServerConfiguration; /** * Read Only Entry Logger */ public class ReadOnlyEntryLogger extends EntryLogger { public ReadOnlyEntryLogger(ServerConfiguration conf) throws IOException { super(conf, new LedgerDirsManager(conf)); } @Override protected void initialize() throws IOException { // do nothing for read only entry logger } @Override void createNewLog() throws IOException { throw new IOException("Can't create new entry log using a readonly entry logger."); } @Override protected boolean removeEntryLog(long entryLogId) { // can't remove entry log in readonly mode return false; } @Override synchronized long addEntry(long ledger, ByteBuffer entry) throws IOException { throw new IOException("Can't add entry to a readonly entry logger."); } } ReadOnlyFileInfo.java000066400000000000000000000024521244507361200347050ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.bookie; import java.io.File; import java.io.IOException; import java.io.RandomAccessFile; import java.nio.ByteBuffer; import java.nio.BufferUnderflowException; import java.nio.channels.FileChannel; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Provide a readonly file info. */ class ReadOnlyFileInfo extends FileInfo { public ReadOnlyFileInfo(File lf, byte[] masterKey) throws IOException { super(lf, masterKey); mode = "r"; } } ScanAndCompareGarbageCollector.java000066400000000000000000000101721244507361200375100ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/** * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. 
The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.bookie; import java.util.Map; import java.util.NavigableMap; import java.util.Set; import org.apache.bookkeeper.meta.LedgerManager; import org.apache.bookkeeper.meta.LedgerManager.LedgerRange; import org.apache.bookkeeper.meta.LedgerManager.LedgerRangeIterator; import org.apache.bookkeeper.util.SnapshotMap; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Garbage collector implementation using scan and compare. * *
<p> * Garbage collection proceeds as follows: * <ul> * <li>fetch all ledgers that exist in zookeeper or the metastore, according to * the LedgerManager; call these the globalActiveLedgers</li> * <li>fetch all ledgers that are active on the bookie server; call these the bkActiveLedgers</li> * <li>loop over the bkActiveLedgers and garbage collect every ledger that is not in * the globalActiveLedgers</li> * </ul> * </p>
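* <p>As a rough usage sketch (the ledgerManager and activeLedgers arguments are assumed to exist already, and the cleaner body is illustrative only):
* <pre>{@code
* GarbageCollector gc = new ScanAndCompareGarbageCollector(ledgerManager, activeLedgers);
* gc.gc(new GarbageCleaner() {
*     public void clean(long ledgerId) {
*         // a real cleaner would delete the ledger's on-disk data here
*     }
* });
* }</pre>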
*/ public class ScanAndCompareGarbageCollector implements GarbageCollector { static final Logger LOG = LoggerFactory.getLogger(ScanAndCompareGarbageCollector.class); private SnapshotMap<Long, Boolean> activeLedgers; private LedgerManager ledgerManager; public ScanAndCompareGarbageCollector(LedgerManager ledgerManager, SnapshotMap<Long, Boolean> activeLedgers) { this.ledgerManager = ledgerManager; this.activeLedgers = activeLedgers; } @Override public void gc(GarbageCleaner garbageCleaner) { // create a snapshot first NavigableMap<Long, Boolean> bkActiveLedgersSnapshot = this.activeLedgers.snapshot(); LedgerRangeIterator ledgerRangeIterator = ledgerManager.getLedgerRanges(); try { // Empty global active ledgers, need to remove all local active ledgers. if (!ledgerRangeIterator.hasNext()) { for (Long bkLid : bkActiveLedgersSnapshot.keySet()) { // remove it from current active ledger bkActiveLedgersSnapshot.remove(bkLid); garbageCleaner.clean(bkLid); } } long lastEnd = -1; while(ledgerRangeIterator.hasNext()) { LedgerRange lRange = ledgerRangeIterator.next(); Map<Long, Boolean> subBkActiveLedgers = null; Long start = lastEnd + 1; Long end = lRange.end(); if (!ledgerRangeIterator.hasNext()) { end = Long.MAX_VALUE; } subBkActiveLedgers = bkActiveLedgersSnapshot.subMap( start, true, end, true); Set<Long> ledgersInMetadata = lRange.getLedgers(); LOG.debug("Active in metadata {}, Active in bookie {}", ledgersInMetadata, subBkActiveLedgers.keySet()); for (Long bkLid : subBkActiveLedgers.keySet()) { if (!ledgersInMetadata.contains(bkLid)) { // remove it from current active ledger subBkActiveLedgers.remove(bkLid); garbageCleaner.clean(bkLid); } } lastEnd = end; } } catch (Exception e) { // ignore exception, collecting garbage next time LOG.warn("Exception when iterating over the metadata", e); } } } bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/client/000077500000000000000000000000001244507361200307735ustar00rootroot00000000000000AsyncCallback.java000066400000000000000000000103371244507361200342550ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/clientpackage org.apache.bookkeeper.client; import java.util.Enumeration; /** * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with this * work for additional information regarding copyright ownership. The ASF * licenses this file to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the * License for the specific language governing permissions and limitations under * the License.
*/ public interface AsyncCallback { public interface AddCallback { /** * Callback declaration * * @param rc * return code * @param lh * ledger handle * @param entryId * entry identifier * @param ctx * context object */ void addComplete(int rc, LedgerHandle lh, long entryId, Object ctx); } public interface CloseCallback { /** * Callback definition * * @param rc * return code * @param lh * ledger handle * @param ctx * context object */ void closeComplete(int rc, LedgerHandle lh, Object ctx); } public interface CreateCallback { /** * Declaration of callback method * * @param rc * return status * @param lh * ledger handle * @param ctx * context object */ void createComplete(int rc, LedgerHandle lh, Object ctx); } public interface OpenCallback { /** * Callback for asynchronous call to open ledger * * @param rc * Return code * @param lh * ledger handle * @param ctx * context object */ public void openComplete(int rc, LedgerHandle lh, Object ctx); } public interface ReadCallback { /** * Callback declaration * * @param rc * return code * @param lh * ledger handle * @param seq * sequence of entries * @param ctx * context object */ void readComplete(int rc, LedgerHandle lh, Enumeration seq, Object ctx); } public interface DeleteCallback { /** * Callback definition for delete operations * * @param rc * return code * @param ctx * context object */ void deleteComplete(int rc, Object ctx); } public interface ReadLastConfirmedCallback { /** * Callback definition for bookie recover operations * * @param rc Return code * @param lastConfirmed The entry id of the last confirmed write or * {@link LedgerHandle#INVALID_ENTRY_ID INVALID_ENTRY_ID} * if no entry has been confirmed * @param ctx * context object */ void readLastConfirmedComplete(int rc, long lastConfirmed, Object ctx); } public interface RecoverCallback { /** * Callback definition for bookie recover operations * * @param rc * return code * @param ctx * context object */ void recoverComplete(int rc, Object ctx); } public interface IsClosedCallback { /** * Callback definition for isClosed operation * * @param rc * return code * @param isClosed * true if ledger is closed */ void isClosedComplete(int rc, boolean isClosed, Object ctx); } } BKException.java000066400000000000000000000275521244507361200337450ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/clientpackage org.apache.bookkeeper.client; /* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
* */ import java.lang.Exception; /** * Class that enumerates all the possible error conditions * */ @SuppressWarnings("serial") public abstract class BKException extends Exception { private int code; BKException(int code) { this.code = code; } /** * Create an exception from an error code * @param code return error code * @return corresponding exception */ public static BKException create(int code) { switch (code) { case Code.ReadException: return new BKReadException(); case Code.QuorumException: return new BKQuorumException(); case Code.NoBookieAvailableException: return new BKBookieException(); case Code.DigestNotInitializedException: return new BKDigestNotInitializedException(); case Code.DigestMatchException: return new BKDigestMatchException(); case Code.NotEnoughBookiesException: return new BKNotEnoughBookiesException(); case Code.NoSuchLedgerExistsException: return new BKNoSuchLedgerExistsException(); case Code.BookieHandleNotAvailableException: return new BKBookieHandleNotAvailableException(); case Code.ZKException: return new ZKException(); case Code.MetaStoreException: return new MetaStoreException(); case Code.LedgerRecoveryException: return new BKLedgerRecoveryException(); case Code.LedgerClosedException: return new BKLedgerClosedException(); case Code.WriteException: return new BKWriteException(); case Code.NoSuchEntryException: return new BKNoSuchEntryException(); case Code.IncorrectParameterException: return new BKIncorrectParameterException(); case Code.InterruptedException: return new BKInterruptedException(); case Code.ProtocolVersionException: return new BKProtocolVersionException(); case Code.MetadataVersionException: return new BKMetadataVersionException(); case Code.LedgerFencedException: return new BKLedgerFencedException(); case Code.UnauthorizedAccessException: return new BKUnauthorizedAccessException(); case Code.UnclosedFragmentException: return new BKUnclosedFragmentException(); case Code.WriteOnReadOnlyBookieException: return new BKWriteOnReadOnlyBookieException(); case Code.ReplicationException: return new BKReplicationException(); case Code.IllegalOpException: return new BKIllegalOpException(); default: return new BKUnexpectedConditionException(); } } /** * List of return codes * */ public interface Code { int OK = 0; int ReadException = -1; int QuorumException = -2; int NoBookieAvailableException = -3; int DigestNotInitializedException = -4; int DigestMatchException = -5; int NotEnoughBookiesException = -6; int NoSuchLedgerExistsException = -7; int BookieHandleNotAvailableException = -8; int ZKException = -9; int LedgerRecoveryException = -10; int LedgerClosedException = -11; int WriteException = -12; int NoSuchEntryException = -13; int IncorrectParameterException = -14; int InterruptedException = -15; int ProtocolVersionException = -16; int MetadataVersionException = -17; int MetaStoreException = -18; int IllegalOpException = -100; int LedgerFencedException = -101; int UnauthorizedAccessException = -102; int UnclosedFragmentException = -103; int WriteOnReadOnlyBookieException = -104; // generic exception code used to propagate in replication pipeline int ReplicationException = -200; // For all unexpected error conditions int UnexpectedConditionException = -999; } public void setCode(int code) { this.code = code; } public int getCode() { return this.code; } public static String getMessage(int code) { switch (code) { case Code.OK: return "No problem"; case Code.ReadException: return "Error while reading ledger"; case Code.QuorumException: return "Invalid quorum
size on ensemble size"; case Code.NoBookieAvailableException: return "Invalid quorum size on ensemble size"; case Code.DigestNotInitializedException: return "Digest engine not initialized"; case Code.DigestMatchException: return "Entry digest does not match"; case Code.NotEnoughBookiesException: return "Not enough non-faulty bookies available"; case Code.NoSuchLedgerExistsException: return "No such ledger exists"; case Code.BookieHandleNotAvailableException: return "Bookie handle is not available"; case Code.ZKException: return "Error while using ZooKeeper"; case Code.MetaStoreException: return "Error while using MetaStore"; case Code.LedgerRecoveryException: return "Error while recovering ledger"; case Code.LedgerClosedException: return "Attempt to write to a closed ledger"; case Code.WriteException: return "Write failed on bookie"; case Code.NoSuchEntryException: return "No such entry"; case Code.IncorrectParameterException: return "Incorrect parameter input"; case Code.InterruptedException: return "Interrupted while waiting for permit"; case Code.ProtocolVersionException: return "Bookie protocol version on server is incompatible with client"; case Code.MetadataVersionException: return "Bad ledger metadata version"; case Code.LedgerFencedException: return "Ledger has been fenced off. Some other client must have opened it to read"; case Code.UnauthorizedAccessException: return "Attempted to access ledger using the wrong password"; case Code.UnclosedFragmentException: return "Attempting to use an unclosed fragment; This is not safe"; case Code.WriteOnReadOnlyBookieException: return "Attempting to write on ReadOnly bookie"; case Code.ReplicationException: return "Errors in replication pipeline"; case Code.IllegalOpException: return "Invalid operation"; default: return "Unexpected condition"; } } public static class BKReadException extends BKException { public BKReadException() { super(Code.ReadException); } } public static class BKNoSuchEntryException extends BKException { public BKNoSuchEntryException() { super(Code.NoSuchEntryException); } } public static class BKQuorumException extends BKException { public BKQuorumException() { super(Code.QuorumException); } } public static class BKBookieException extends BKException { public BKBookieException() { super(Code.NoBookieAvailableException); } } public static class BKDigestNotInitializedException extends BKException { public BKDigestNotInitializedException() { super(Code.DigestNotInitializedException); } } public static class BKDigestMatchException extends BKException { public BKDigestMatchException() { super(Code.DigestMatchException); } } public static class BKIllegalOpException extends BKException { public BKIllegalOpException() { super(Code.IllegalOpException); } } public static class BKUnexpectedConditionException extends BKException { public BKUnexpectedConditionException() { super(Code.UnexpectedConditionException); } } public static class BKNotEnoughBookiesException extends BKException { public BKNotEnoughBookiesException() { super(Code.NotEnoughBookiesException); } } public static class BKWriteException extends BKException { public BKWriteException() { super(Code.WriteException); } } public static class BKProtocolVersionException extends BKException { public BKProtocolVersionException() { super(Code.ProtocolVersionException); } } public static class BKMetadataVersionException extends BKException { public BKMetadataVersionException() { super(Code.MetadataVersionException); } } public static class BKNoSuchLedgerExistsException extends 
BKException { public BKNoSuchLedgerExistsException() { super(Code.NoSuchLedgerExistsException); } } public static class BKBookieHandleNotAvailableException extends BKException { public BKBookieHandleNotAvailableException() { super(Code.BookieHandleNotAvailableException); } } public static class ZKException extends BKException { public ZKException() { super(Code.ZKException); } } public static class MetaStoreException extends BKException { public MetaStoreException() { super(Code.MetaStoreException); } } public static class BKLedgerRecoveryException extends BKException { public BKLedgerRecoveryException() { super(Code.LedgerRecoveryException); } } public static class BKLedgerClosedException extends BKException { public BKLedgerClosedException() { super(Code.LedgerClosedException); } } public static class BKIncorrectParameterException extends BKException { public BKIncorrectParameterException() { super(Code.IncorrectParameterException); } } public static class BKInterruptedException extends BKException { public BKInterruptedException() { super(Code.InterruptedException); } } public static class BKLedgerFencedException extends BKException { public BKLedgerFencedException() { super(Code.LedgerFencedException); } } public static class BKUnauthorizedAccessException extends BKException { public BKUnauthorizedAccessException() { super(Code.UnauthorizedAccessException); } } public static class BKUnclosedFragmentException extends BKException { public BKUnclosedFragmentException() { super(Code.UnclosedFragmentException); } } public static class BKWriteOnReadOnlyBookieException extends BKException { public BKWriteOnReadOnlyBookieException() { super(Code.WriteOnReadOnlyBookieException); } } public static class BKReplicationException extends BKException { public BKReplicationException() { super(Code.ReplicationException); } } } BookKeeper.java000066400000000000000000000627411244507361200336170ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/client/** * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
* */ package org.apache.bookkeeper.client; import java.io.IOException; import java.util.concurrent.CountDownLatch; import java.util.concurrent.Executors; import java.util.concurrent.ScheduledExecutorService; import java.util.concurrent.TimeUnit; import org.apache.bookkeeper.client.AsyncCallback.CreateCallback; import org.apache.bookkeeper.client.AsyncCallback.DeleteCallback; import org.apache.bookkeeper.client.AsyncCallback.OpenCallback; import org.apache.bookkeeper.client.AsyncCallback.IsClosedCallback; import org.apache.bookkeeper.client.BKException.Code; import org.apache.bookkeeper.conf.ClientConfiguration; import org.apache.bookkeeper.meta.LedgerManager; import org.apache.bookkeeper.meta.LedgerManagerFactory; import org.apache.bookkeeper.proto.BookieClient; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.GenericCallback; import org.apache.bookkeeper.util.OrderedSafeExecutor; import org.apache.bookkeeper.util.ZkUtils; import org.apache.bookkeeper.zookeeper.ZooKeeperWatcherBase; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.ZooKeeper; import org.jboss.netty.channel.socket.ClientSocketChannelFactory; import org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * BookKeeper client. We assume there is a single writer to a ledger at any * time. * * There are four possible operations: start a new ledger, write to a ledger, * read from a ledger and delete a ledger. * * The exceptions resulting from synchronous calls and error codes resulting from * asynchronous calls can be found in the class {@link BKException}. * * */ public class BookKeeper { static final Logger LOG = LoggerFactory.getLogger(BookKeeper.class); final ZooKeeper zk; final CountDownLatch connectLatch = new CountDownLatch(1); final static int zkConnectTimeoutMs = 5000; final ClientSocketChannelFactory channelFactory; // whether the socket factory is one we created, or is owned by whoever // instantiated us boolean ownChannelFactory = false; // whether the zk handle is one we created, or is owned by whoever // instantiated us boolean ownZKHandle = false; final BookieClient bookieClient; final BookieWatcher bookieWatcher; final OrderedSafeExecutor mainWorkerPool; final ScheduledExecutorService scheduler; // Ledger manager responsible for how to store ledger meta data final LedgerManagerFactory ledgerManagerFactory; final LedgerManager ledgerManager; final ClientConfiguration conf; interface ZKConnectCallback { public void connected(); public void connectionFailed(int code); } /** * Create a bookkeeper client. A zookeeper client and a client socket factory * will be instantiated as part of this constructor. * * @param servers * A list of one or more servers on which zookeeper is running. The * client assumes that the running bookies have been registered with * zookeeper under the path * {@link BookieWatcher#bookieRegistrationPath} * @throws IOException * @throws InterruptedException * @throws KeeperException */ public BookKeeper(String servers) throws IOException, InterruptedException, KeeperException { this(new ClientConfiguration().setZkServers(servers)); } /** * Create a bookkeeper client using a configuration object. * A zookeeper client and a client socket factory will be * instantiated as part of this constructor.
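* <p>A minimal sketch of this path, assuming a ZooKeeper ensemble is reachable at localhost:2181 (the address is illustrative):
* <pre>{@code
* ClientConfiguration conf = new ClientConfiguration();
* conf.setZkServers("localhost:2181");
* BookKeeper bk = new BookKeeper(conf);
* // ... create, write to and read from ledgers ...
* bk.close();
* }</pre>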
* * @param conf * Client Configuration object * @throws IOException * @throws InterruptedException * @throws KeeperException */ public BookKeeper(final ClientConfiguration conf) throws IOException, InterruptedException, KeeperException { this.conf = conf; ZooKeeperWatcherBase w = new ZooKeeperWatcherBase(conf.getZkTimeout()); this.zk = ZkUtils .createConnectedZookeeperClient(conf.getZkServers(), w); this.channelFactory = new NioClientSocketChannelFactory(Executors.newCachedThreadPool(), Executors.newCachedThreadPool()); this.scheduler = Executors.newSingleThreadScheduledExecutor(); mainWorkerPool = new OrderedSafeExecutor(conf.getNumWorkerThreads()); bookieClient = new BookieClient(conf, channelFactory, mainWorkerPool); bookieWatcher = new BookieWatcher(conf, scheduler, this); bookieWatcher.readBookiesBlocking(); ledgerManagerFactory = LedgerManagerFactory.newLedgerManagerFactory(conf, zk); ledgerManager = ledgerManagerFactory.newLedgerManager(); ownChannelFactory = true; ownZKHandle = true; } /** * Create a bookkeeper client but use the passed in zookeeper client instead * of instantiating one. * * @param conf * Client Configuration object * {@link ClientConfiguration} * @param zk * Zookeeper client instance connected to the zookeeper with which * the bookies have registered * @throws IOException * @throws InterruptedException * @throws KeeperException */ public BookKeeper(ClientConfiguration conf, ZooKeeper zk) throws IOException, InterruptedException, KeeperException { this(conf, zk, new NioClientSocketChannelFactory(Executors.newCachedThreadPool(), Executors.newCachedThreadPool())); ownChannelFactory = true; } /** * Create a bookkeeper client but use the passed in zookeeper client and * client socket channel factory instead of instantiating those. * * @param conf * Client Configuration Object * {@link ClientConfiguration} * @param zk * Zookeeper client instance connected to the zookeeper with which * the bookies have registered. The ZooKeeper client must be connected * before it is passed to BookKeeper. Otherwise a KeeperException is thrown. * @param channelFactory * A factory that will be used to create connections to the bookies * @throws IOException * @throws InterruptedException * @throws KeeperException if the passed zk handle is not connected */ public BookKeeper(ClientConfiguration conf, ZooKeeper zk, ClientSocketChannelFactory channelFactory) throws IOException, InterruptedException, KeeperException { if (zk == null || channelFactory == null) { throw new NullPointerException(); } if (!zk.getState().isConnected()) { LOG.error("Unconnected zookeeper handle passed to bookkeeper"); throw KeeperException.create(KeeperException.Code.CONNECTIONLOSS); } this.conf = conf; this.zk = zk; this.channelFactory = channelFactory; this.scheduler = Executors.newSingleThreadScheduledExecutor(); mainWorkerPool = new OrderedSafeExecutor(conf.getNumWorkerThreads()); bookieClient = new BookieClient(conf, channelFactory, mainWorkerPool); bookieWatcher = new BookieWatcher(conf, scheduler, this); bookieWatcher.readBookiesBlocking(); ledgerManagerFactory = LedgerManagerFactory.newLedgerManagerFactory(conf, zk); ledgerManager = ledgerManagerFactory.newLedgerManager(); } LedgerManager getLedgerManager() { return ledgerManager; } /** * There are 2 digest types that can be used for verification. The CRC32 is * cheap to compute but does not protect against byzantine bookies (i.e., a * bookie might report fake bytes and a matching CRC32). 
The MAC code is more * expensive to compute, but is protected by a password, i.e., a bookie can't * report fake bytes with a matching MAC unless it knows the password. */ public enum DigestType { MAC, CRC32 }; ZooKeeper getZkHandle() { return zk; } protected ClientConfiguration getConf() { return conf; } /** * Get the BookieClient, currently used for doing bookie recovery. * * @return BookieClient for the BookKeeper instance. */ BookieClient getBookieClient() { return bookieClient; } /** * Creates a new ledger asynchronously. To create a ledger, we need to specify * the ensemble size, the quorum size, the digest type, a password, a callback * implementation, and an optional control object. The ensemble size is how * many bookies the entries should be striped among and the quorum size is the * degree of replication of each entry. The digest type is either a MAC or a * CRC. Note that the CRC option is not able to protect a client against a * bookie that replaces an entry. The password is used not only to * authenticate access to a ledger, but also to verify entries in ledgers. * * @param ensSize * number of bookies over which to stripe entries * @param writeQuorumSize * number of bookies each entry will be written to. Each of these bookies * must acknowledge the entry before the call is completed. * @param digestType * digest type, either MAC or CRC32 * @param passwd * password * @param cb * createCallback implementation * @param ctx * optional control object */ public void asyncCreateLedger(final int ensSize, final int writeQuorumSize, final DigestType digestType, final byte[] passwd, final CreateCallback cb, final Object ctx) { asyncCreateLedger(ensSize, writeQuorumSize, writeQuorumSize, digestType, passwd, cb, ctx); } /** * Creates a new ledger asynchronously. Ledgers created with this call have * a separate write quorum and ack quorum size. The write quorum must be larger than * or equal to the ack quorum. * * Separating the write and the ack quorum allows the BookKeeper client to continue * writing when a bookie has failed but the failure has not yet been detected. Detecting * that a bookie has failed can take a number of seconds, as configured by the read timeout * {@link ClientConfiguration#getReadTimeout()}. Once the bookie failure is detected, * that bookie will be removed from the ensemble. * * The other parameters match those of {@link #asyncCreateLedger(int, int, DigestType, byte[], * AsyncCallback.CreateCallback, Object)} * * @param ensSize * number of bookies over which to stripe entries * @param writeQuorumSize * number of bookies each entry will be written to * @param ackQuorumSize * number of bookies which must acknowledge an entry before the call is completed * @param digestType * digest type, either MAC or CRC32 * @param passwd * password * @param cb * createCallback implementation * @param ctx * optional control object */ public void asyncCreateLedger(final int ensSize, final int writeQuorumSize, final int ackQuorumSize, final DigestType digestType, final byte[] passwd, final CreateCallback cb, final Object ctx) { if (writeQuorumSize < ackQuorumSize) { throw new IllegalArgumentException("Write quorum must be larger than or equal to ack quorum"); } new LedgerCreateOp(BookKeeper.this, ensSize, writeQuorumSize, ackQuorumSize, digestType, passwd, cb, ctx) .initiate(); } /** * Creates a new ledger. Default of 3 servers, and quorum of 2 servers.
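* <p>For example, a sketch (bk is an existing client; the password bytes are illustrative):
* <pre>{@code
* LedgerHandle lh = bk.createLedger(DigestType.MAC, "my-passwd".getBytes());
* long entryId = lh.addEntry("hello".getBytes());
* lh.close();
* }</pre>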
* * @param digestType * digest type, either MAC or CRC32 * @param passwd * password * @return a handle to the newly created ledger * @throws InterruptedException * @throws BKException */ public LedgerHandle createLedger(DigestType digestType, byte passwd[]) throws BKException, InterruptedException { return createLedger(3, 2, digestType, passwd); } /** * Synchronous call to create ledger. Parameters match those of * {@link #asyncCreateLedger(int, int, DigestType, byte[], * AsyncCallback.CreateCallback, Object)} * * @param ensSize * @param qSize * @param digestType * @param passwd * @return a handle to the newly created ledger * @throws InterruptedException * @throws BKException */ public LedgerHandle createLedger(int ensSize, int qSize, DigestType digestType, byte passwd[]) throws InterruptedException, BKException { return createLedger(ensSize, qSize, qSize, digestType, passwd); } /** * Synchronous call to create ledger. Parameters match those of * {@link #asyncCreateLedger(int, int, int, DigestType, byte[], * AsyncCallback.CreateCallback, Object)} * * @param ensSize * @param writeQuorumSize * @param ackQuorumSize * @param digestType * @param passwd * @return a handle to the newly created ledger * @throws InterruptedException * @throws BKException */ public LedgerHandle createLedger(int ensSize, int writeQuorumSize, int ackQuorumSize, DigestType digestType, byte passwd[]) throws InterruptedException, BKException { SyncCounter counter = new SyncCounter(); counter.inc(); /* * Calls asynchronous version */ asyncCreateLedger(ensSize, writeQuorumSize, ackQuorumSize, digestType, passwd, new SyncCreateCallback(), counter); /* * Wait */ counter.block(0); if (counter.getrc() != BKException.Code.OK) { LOG.error("Error while creating ledger : {}", counter.getrc()); throw BKException.create(counter.getrc()); } else if (counter.getLh() == null) { LOG.error("Unexpected condition : no ledger handle returned for a successful ledger creation"); throw BKException.create(BKException.Code.UnexpectedConditionException); } return counter.getLh(); } /** * Open existing ledger asynchronously for reading. * * Opening a ledger with this method invokes fencing and recovery on the ledger * if the ledger has not been closed. Fencing will block all other clients from * writing to the ledger. Recovery will make sure that the ledger is closed * before reading from it. * * Recovery also makes sure that any entries which reached one bookie, but not a * quorum, will be replicated to a quorum of bookies. This occurs in cases where * the writer of a ledger crashes after sending a write request to one bookie but * before being able to send it to the rest of the bookies in the quorum. * * If the ledger is already closed, neither fencing nor recovery will be applied. * * @see LedgerHandle#asyncClose * * @param lId * ledger identifier * @param digestType * digest type, either MAC or CRC32 * @param passwd * password * @param ctx * optional control object */ public void asyncOpenLedger(final long lId, final DigestType digestType, final byte passwd[], final OpenCallback cb, final Object ctx) { new LedgerOpenOp(BookKeeper.this, lId, digestType, passwd, cb, ctx).initiate(); } /** * Open existing ledger asynchronously for reading, but it does not try to
The application needs to use * it carefully, since the writer might have crashed and ledger will remain * unsealed forever if there is no external mechanism to detect the failure * of the writer and the ledger is not open in a safe manner, invoking the * recovery procedure. * * Opening a ledger without recovery does not fence the ledger. As such, other * clients can continue to write to the ledger. * * This method returns a read only ledger handle. It will not be possible * to add entries to the ledger. Any attempt to add entries will throw an * exception. * * Reads from the returned ledger will only be able to read entries up until * the lastConfirmedEntry at the point in time at which the ledger was opened. * * @param lId * ledger identifier * @param digestType * digest type, either MAC or CRC32 * @param passwd * password * @param ctx * optional control object */ public void asyncOpenLedgerNoRecovery(final long lId, final DigestType digestType, final byte passwd[], final OpenCallback cb, final Object ctx) { new LedgerOpenOp(BookKeeper.this, lId, digestType, passwd, cb, ctx).initiateWithoutRecovery(); } /** * Synchronous open ledger call * * @see #asyncOpenLedger * @param lId * ledger identifier * @param digestType * digest type, either MAC or CRC32 * @param passwd * password * @return a handle to the open ledger * @throws InterruptedException * @throws BKException */ public LedgerHandle openLedger(long lId, DigestType digestType, byte passwd[]) throws BKException, InterruptedException { SyncCounter counter = new SyncCounter(); counter.inc(); /* * Calls async open ledger */ asyncOpenLedger(lId, digestType, passwd, new SyncOpenCallback(), counter); /* * Wait */ counter.block(0); if (counter.getrc() != BKException.Code.OK) throw BKException.create(counter.getrc()); return counter.getLh(); } /** * Synchronous, unsafe open ledger call * * @see #asyncOpenLedgerNoRecovery * @param lId * ledger identifier * @param digestType * digest type, either MAC or CRC32 * @param passwd * password * @return a handle to the open ledger * @throws InterruptedException * @throws BKException */ public LedgerHandle openLedgerNoRecovery(long lId, DigestType digestType, byte passwd[]) throws BKException, InterruptedException { SyncCounter counter = new SyncCounter(); counter.inc(); /* * Calls async open ledger */ asyncOpenLedgerNoRecovery(lId, digestType, passwd, new SyncOpenCallback(), counter); /* * Wait */ counter.block(0); if (counter.getrc() != BKException.Code.OK) throw BKException.create(counter.getrc()); return counter.getLh(); } /** * Deletes a ledger asynchronously. * * @param lId * ledger Id * @param cb * deleteCallback implementation * @param ctx * optional control object */ public void asyncDeleteLedger(final long lId, final DeleteCallback cb, final Object ctx) { new LedgerDeleteOp(BookKeeper.this, lId, cb, ctx).initiate(); } /** * Synchronous call to delete a ledger. 
Parameters match those of * {@link #asyncDeleteLedger(long, AsyncCallback.DeleteCallback, Object)} * * @param lId * ledgerId * @throws InterruptedException * @throws BKException.BKNoSuchLedgerExistsException if the ledger doesn't exist * @throws BKException */ public void deleteLedger(long lId) throws InterruptedException, BKException { SyncCounter counter = new SyncCounter(); counter.inc(); // Call asynchronous version asyncDeleteLedger(lId, new SyncDeleteCallback(), counter); // Wait counter.block(0); if (counter.getrc() != BKException.Code.OK) { LOG.error("Error deleting ledger " + lId + " : " + counter.getrc()); throw BKException.create(counter.getrc()); } } /** * Check asynchronously whether the ledger with identifier lId * has been closed. * * @param lId ledger identifier * @param cb callback method */ public void asyncIsClosed(long lId, final IsClosedCallback cb, final Object ctx){ ledgerManager.readLedgerMetadata(lId, new GenericCallback<LedgerMetadata>(){ public void operationComplete(int rc, LedgerMetadata lm){ if (rc == BKException.Code.OK) { cb.isClosedComplete(rc, lm.isClosed(), ctx); } else { cb.isClosedComplete(rc, false, ctx); } } }); } /** * Check whether the ledger with identifier lId * has been closed. * * @param lId * @return boolean true if ledger has been closed * @throws BKException */ public boolean isClosed(long lId) throws BKException, InterruptedException { final class Result { int rc; boolean isClosed; final CountDownLatch notifier = new CountDownLatch(1); } final Result result = new Result(); final IsClosedCallback cb = new IsClosedCallback(){ public void isClosedComplete(int rc, boolean isClosed, Object ctx){ result.isClosed = isClosed; result.rc = rc; result.notifier.countDown(); } }; /* * Call asynchronous version of isClosed */ asyncIsClosed(lId, cb, null); /* * Wait for callback */ result.notifier.await(); if (result.rc != BKException.Code.OK) { throw BKException.create(result.rc); } return result.isClosed; } /** * Shuts down client. * */ public void close() throws InterruptedException, BKException { scheduler.shutdown(); if (!scheduler.awaitTermination(10, TimeUnit.SECONDS)) { LOG.warn("The scheduler did not shutdown cleanly"); } mainWorkerPool.shutdown(); if (!mainWorkerPool.awaitTermination(10, TimeUnit.SECONDS)) { LOG.warn("The mainWorkerPool did not shutdown cleanly"); } bookieClient.close(); try { ledgerManager.close(); ledgerManagerFactory.uninitialize(); } catch (IOException ie) { LOG.error("Failed to close ledger manager : ", ie); } if (ownChannelFactory) { channelFactory.releaseExternalResources(); } if (ownZKHandle) { zk.close(); } } private static class SyncCreateCallback implements CreateCallback { /** * Create callback implementation for synchronous create call. * * @param rc * return code * @param lh * ledger handle object * @param ctx * optional control object */ public void createComplete(int rc, LedgerHandle lh, Object ctx) { SyncCounter counter = (SyncCounter) ctx; counter.setLh(lh); counter.setrc(rc); counter.dec(); } } static class SyncOpenCallback implements OpenCallback { /** * Callback method for synchronous open operation * * @param rc * return code * @param lh * ledger handle * @param ctx * optional control object */ public void openComplete(int rc, LedgerHandle lh, Object ctx) { SyncCounter counter = (SyncCounter) ctx; counter.setLh(lh); LOG.debug("Open complete: {}", rc); counter.setrc(rc); counter.dec(); } } private static class SyncDeleteCallback implements DeleteCallback { /** * Delete callback implementation for synchronous delete call.
* * @param rc * return code * @param ctx * optional control object */ public void deleteComplete(int rc, Object ctx) { SyncCounter counter = (SyncCounter) ctx; counter.setrc(rc); counter.dec(); } } } BookKeeperAdmin.java000066400000000000000000001125701244507361200345640ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/client/** * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.client; import java.io.IOException; import java.net.InetSocketAddress; import java.util.Collection; import java.util.ArrayList; import java.util.HashMap; import java.util.HashSet; import java.util.Iterator; import java.util.LinkedList; import java.util.List; import java.util.Map; import java.util.NoSuchElementException; import java.util.Random; import java.util.UUID; import org.apache.bookkeeper.client.AsyncCallback.OpenCallback; import org.apache.bookkeeper.client.AsyncCallback.RecoverCallback; import org.apache.bookkeeper.client.BookKeeper.SyncOpenCallback; import org.apache.bookkeeper.client.LedgerFragmentReplicator.SingleFragmentCallback; import org.apache.bookkeeper.conf.ClientConfiguration; import org.apache.bookkeeper.meta.LedgerManager.LedgerRange; import org.apache.bookkeeper.meta.LedgerManager.LedgerRangeIterator; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.MultiCallback; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.Processor; import org.apache.bookkeeper.util.BookKeeperConstants; import org.apache.bookkeeper.util.IOUtils; import org.apache.bookkeeper.util.ZkUtils; import org.apache.bookkeeper.zookeeper.ZooKeeperWatcherBase; import org.apache.zookeeper.AsyncCallback; import org.apache.zookeeper.CreateMode; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.ZKUtil; import org.apache.zookeeper.ZooKeeper; import org.apache.zookeeper.KeeperException.Code; import org.apache.zookeeper.ZooDefs.Ids; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Admin client for BookKeeper clusters */ public class BookKeeperAdmin { private static Logger LOG = LoggerFactory.getLogger(BookKeeperAdmin.class); // ZK client instance private ZooKeeper zk; // ZK ledgers related String constants private final String bookiesPath; // BookKeeper client instance private BookKeeper bkc; // LedgerFragmentReplicator instance private LedgerFragmentReplicator lfr; /* * Random number generator used to choose an available bookie server to * replicate data from a dead bookie. */ private Random rand = new Random(); /** * Constructor that takes in a ZooKeeper servers connect string so we know * how to connect to ZooKeeper to retrieve information about the BookKeeper * cluster. We need this before we can do any type of admin operations on * the BookKeeper cluster. 
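* <p>A minimal sketch, with an illustrative ZooKeeper address:
* <pre>{@code
* BookKeeperAdmin admin = new BookKeeperAdmin("zk1.example.com:2181");
* Collection<InetSocketAddress> bookies = admin.getAvailableBookies();
* admin.close();
* }</pre>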
* * @param zkServers * Comma separated list of hostname:port pairs for the ZooKeeper * servers cluster. * @throws IOException * throws this exception if there is an error instantiating the * ZooKeeper client. * @throws InterruptedException * Throws this exception if there is an error instantiating the * BookKeeper client. * @throws KeeperException * Throws this exception if there is an error instantiating the * BookKeeper client. */ public BookKeeperAdmin(String zkServers) throws IOException, InterruptedException, KeeperException { this(new ClientConfiguration().setZkServers(zkServers)); } /** * Constructor that takes in a configuration object so we know * how to connect to ZooKeeper to retrieve information about the BookKeeper * cluster. We need this before we can do any type of admin operations on * the BookKeeper cluster. * * @param conf * Client Configuration Object * @throws IOException * throws this exception if there is an error instantiating the * ZooKeeper client. * @throws InterruptedException * Throws this exception if there is an error instantiating the * BookKeeper client. * @throws KeeperException * Throws this exception if there is an error instantiating the * BookKeeper client. */ public BookKeeperAdmin(ClientConfiguration conf) throws IOException, InterruptedException, KeeperException { // Create the ZooKeeper client instance ZooKeeperWatcherBase w = new ZooKeeperWatcherBase(conf.getZkTimeout()); zk = ZkUtils.createConnectedZookeeperClient(conf.getZkServers(), w); // Create the bookie path bookiesPath = conf.getZkAvailableBookiesPath(); // Create the BookKeeper client instance bkc = new BookKeeper(conf, zk); this.lfr = new LedgerFragmentReplicator(bkc); } /** * Constructor that takes in a BookKeeper instance. This will be useful * when a user already has a bk instance ready. * * @param bkc * - bookkeeper instance */ public BookKeeperAdmin(final BookKeeper bkc) { this.bkc = bkc; this.zk = bkc.zk; this.bookiesPath = bkc.getConf().getZkAvailableBookiesPath(); this.lfr = new LedgerFragmentReplicator(bkc); } /** * Gracefully release resources that this client uses. * * @throws InterruptedException * if there is an error shutting down the clients that this * class uses. */ public void close() throws InterruptedException, BKException { bkc.close(); zk.close(); } /** * Get a list of the available bookies. * * @return a collection of bookie addresses */ public Collection<InetSocketAddress> getAvailableBookies() throws BKException { return bkc.bookieWatcher.getBookies(); } /** * Get a list of readonly bookies * * @return a collection of bookie addresses */ public Collection<InetSocketAddress> getReadOnlyBookies() { return bkc.bookieWatcher.getReadOnlyBookies(); } /** * Notify when the available list of bookies changes. * This is a one-shot notification. To receive subsequent notifications * the listener must be registered again. * * @param listener the listener to notify */ public void notifyBookiesChanged(final BookiesListener listener) throws BKException { bkc.bookieWatcher.notifyBookiesChanged(listener); } /** * Open a ledger as an administrator. This means that no digest password * checks are done.
Otherwise, the call is identical to BookKeeper#asyncOpenLedger * * @param lId * ledger identifier * @param cb * Callback which will receive a LedgerHandle object * @param ctx * optional context object, to be passed to the callback (can be null) * * @see BookKeeper#asyncOpenLedger */ public void asyncOpenLedger(final long lId, final OpenCallback cb, final Object ctx) { new LedgerOpenOp(bkc, lId, cb, ctx).initiate(); } /** * Open a ledger as an administrator. This means that no digest password * checks are done. Otherwise, the call is identical to * BookKeeper#openLedger * * @param lId * - ledger identifier * @see BookKeeper#openLedger */ public LedgerHandle openLedger(final long lId) throws InterruptedException, BKException { SyncCounter counter = new SyncCounter(); counter.inc(); new LedgerOpenOp(bkc, lId, new SyncOpenCallback(), counter).initiate(); /* * Wait */ counter.block(0); if (counter.getrc() != BKException.Code.OK) { throw BKException.create(counter.getrc()); } return counter.getLh(); } /** * Open a ledger as an administrator without recovering the ledger. This means * that no digest password checks are done. Otherwise, the call is identical * to BookKeeper#asyncOpenLedgerNoRecovery * * @param lId * ledger identifier * @param cb * Callback which will receive a LedgerHandle object * @param ctx * optional context object, to be passed to the callback (can be null) * * @see BookKeeper#asyncOpenLedgerNoRecovery */ public void asyncOpenLedgerNoRecovery(final long lId, final OpenCallback cb, final Object ctx) { new LedgerOpenOp(bkc, lId, cb, ctx).initiateWithoutRecovery(); } /** * Open a ledger as an administrator without recovering the ledger. This * means that no digest password checks are done. Otherwise, the call is * identical to BookKeeper#openLedgerNoRecovery * * @param lId * ledger identifier * @see BookKeeper#openLedgerNoRecovery */ public LedgerHandle openLedgerNoRecovery(final long lId) throws InterruptedException, BKException { SyncCounter counter = new SyncCounter(); counter.inc(); new LedgerOpenOp(bkc, lId, new SyncOpenCallback(), counter) .initiateWithoutRecovery(); /* * Wait */ counter.block(0); if (counter.getrc() != BKException.Code.OK) { throw BKException.create(counter.getrc()); } return counter.getLh(); } // Object used for calling async methods and waiting for them to complete. static class SyncObject { boolean value; int rc; public SyncObject() { value = false; rc = BKException.Code.OK; } } /** * Synchronous method to rebuild and recover the ledger fragments data that * was stored on the source bookie. That bookie could have failed completely * and now the ledger data that was stored on it is under replicated. An * optional destination bookie server could be given if we want to copy all * of the ledger fragments data on the failed source bookie to it. * Otherwise, we will just randomly distribute the ledger fragments to the * active set of bookies, perhaps based on load. All ZooKeeper ledger * metadata will be updated to point to the new bookie(s) that contain the * replicated ledger fragments. * * @param bookieSrc * Source bookie that had a failure. We want to replicate the * ledger fragments that were stored there. * @param bookieDest * Optional destination bookie that if passed, we will copy all * of the ledger fragments from the source bookie over to it.
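* <p>A usage sketch; the failed bookie's address is illustrative, and passing null as the destination lets the client spread the fragments across the available bookies:
* <pre>{@code
* admin.recoverBookieData(new InetSocketAddress("failed-bookie.example.com", 3181), null);
* }</pre>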
*/ public void recoverBookieData(final InetSocketAddress bookieSrc, final InetSocketAddress bookieDest) throws InterruptedException, BKException { SyncObject sync = new SyncObject(); // Call the async method to recover bookie data. asyncRecoverBookieData(bookieSrc, bookieDest, new RecoverCallback() { @Override public void recoverComplete(int rc, Object ctx) { LOG.info("Recover bookie operation completed with rc: " + rc); SyncObject syncObj = (SyncObject) ctx; synchronized (syncObj) { syncObj.rc = rc; syncObj.value = true; syncObj.notify(); } } }, sync); // Wait for the async method to complete. synchronized (sync) { while (sync.value == false) { sync.wait(); } } if (sync.rc != BKException.Code.OK) { throw BKException.create(sync.rc); } } /** * Async method to rebuild and recover the ledger fragments data that was * stored on the source bookie. That bookie could have failed completely and * now the ledger data that was stored on it is under replicated. An * optional destination bookie server could be given if we want to copy all * of the ledger fragments data on the failed source bookie to it. * Otherwise, we will just randomly distribute the ledger fragments to the * active set of bookies, perhaps based on load. All ZooKeeper ledger * metadata will be updated to point to the new bookie(s) that contain the * replicated ledger fragments. * * @param bookieSrc * Source bookie that had a failure. We want to replicate the * ledger fragments that were stored there. * @param bookieDest * Optional destination bookie that if passed, we will copy all * of the ledger fragments from the source bookie over to it. * @param cb * RecoverCallback to invoke once all of the data on the dead * bookie has been recovered and replicated. * @param context * Context for the RecoverCallback to call. */ public void asyncRecoverBookieData(final InetSocketAddress bookieSrc, final InetSocketAddress bookieDest, final RecoverCallback cb, final Object context) { // Sync ZK to make sure we're reading the latest bookie data. zk.sync(bookiesPath, new AsyncCallback.VoidCallback() { @Override public void processResult(int rc, String path, Object ctx) { if (rc != Code.OK.intValue()) { LOG.error("ZK error syncing: ", KeeperException.create(KeeperException.Code.get(rc), path)); cb.recoverComplete(BKException.Code.ZKException, context); return; } getAvailableBookies(bookieSrc, bookieDest, cb, context); }; }, null); } /** * This method asynchronously gets the set of available Bookies that the * dead input bookie's data will be copied over into. If the user passed in * a specific destination bookie, then just use that one. Otherwise, we'll * randomly pick one of the other available bookies to use for each ledger * fragment we are replicating. * * @param bookieSrc * Source bookie that had a failure. We want to replicate the * ledger fragments that were stored there. * @param bookieDest * Optional destination bookie that if passed, we will copy all * of the ledger fragments from the source bookie over to it. * @param cb * RecoverCallback to invoke once all of the data on the dead * bookie has been recovered and replicated. * @param context * Context for the RecoverCallback to call. 
*/ private void getAvailableBookies(final InetSocketAddress bookieSrc, final InetSocketAddress bookieDest, final RecoverCallback cb, final Object context) { final List availableBookies = new LinkedList(); if (bookieDest != null) { availableBookies.add(bookieDest); // Now poll ZK to get the active ledgers getActiveLedgers(bookieSrc, bookieDest, cb, context, availableBookies); } else { zk.getChildren(bookiesPath, null, new AsyncCallback.ChildrenCallback() { @Override public void processResult(int rc, String path, Object ctx, List children) { if (rc != Code.OK.intValue()) { LOG.error("ZK error getting bookie nodes: ", KeeperException.create(KeeperException.Code .get(rc), path)); cb.recoverComplete(BKException.Code.ZKException, context); return; } for (String bookieNode : children) { if (BookKeeperConstants.READONLY .equals(bookieNode)) { // exclude the readonly node from available bookies. continue; } String parts[] = bookieNode.split(BookKeeperConstants.COLON); if (parts.length < 2) { LOG.error("Bookie Node retrieved from ZK has invalid name format: " + bookieNode); cb.recoverComplete(BKException.Code.ZKException, context); return; } availableBookies.add(new InetSocketAddress(parts[0], Integer.parseInt(parts[1]))); } // Now poll ZK to get the active ledgers getActiveLedgers(bookieSrc, null, cb, context, availableBookies); } }, null); } } /** * This method asynchronously polls ZK to get the current set of active * ledgers. From this, we can open each ledger and look at the metadata to * determine if any of the ledger fragments for it were stored at the dead * input bookie. * * @param bookieSrc * Source bookie that had a failure. We want to replicate the * ledger fragments that were stored there. * @param bookieDest * Optional destination bookie that if passed, we will copy all * of the ledger fragments from the source bookie over to it. * @param cb * RecoverCallback to invoke once all of the data on the dead * bookie has been recovered and replicated. * @param context * Context for the RecoverCallback to call. * @param availableBookies * List of Bookie Servers that are available to use for * replicating data on the failed bookie. This could contain a * single bookie server if the user explicitly chose a bookie * server to replicate data to. */ private void getActiveLedgers(final InetSocketAddress bookieSrc, final InetSocketAddress bookieDest, final RecoverCallback cb, final Object context, final List availableBookies) { // Wrapper class around the RecoverCallback so it can be used // as the final VoidCallback to process ledgers class RecoverCallbackWrapper implements AsyncCallback.VoidCallback { final RecoverCallback cb; RecoverCallbackWrapper(RecoverCallback cb) { this.cb = cb; } @Override public void processResult(int rc, String path, Object ctx) { cb.recoverComplete(rc, ctx); } } Processor ledgerProcessor = new Processor() { @Override public void process(Long ledgerId, AsyncCallback.VoidCallback iterCallback) { recoverLedger(bookieSrc, ledgerId, iterCallback, availableBookies); } }; bkc.getLedgerManager().asyncProcessLedgers( ledgerProcessor, new RecoverCallbackWrapper(cb), context, BKException.Code.OK, BKException.Code.LedgerRecoveryException); } /** * Get a new random bookie, but ensure that it isn't one that is already * in the ensemble for the ledger. 
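 *
 * The candidate set is simply {@code availableBookies} minus
 * {@code bookiesAlreadyInEnsemble}; one candidate is picked uniformly at
 * random, and BKNotEnoughBookiesException is thrown if no candidate is
 * left.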
*/ private InetSocketAddress getNewBookie(final List bookiesAlreadyInEnsemble, final List availableBookies) throws BKException.BKNotEnoughBookiesException { ArrayList candidates = new ArrayList(); candidates.addAll(availableBookies); candidates.removeAll(bookiesAlreadyInEnsemble); if (candidates.size() == 0) { throw new BKException.BKNotEnoughBookiesException(); } return candidates.get(rand.nextInt(candidates.size())); } /** * This method asynchronously recovers a given ledger if any of the ledger * entries were stored on the failed bookie. * * @param bookieSrc * Source bookie that had a failure. We want to replicate the * ledger fragments that were stored there. * @param lId * Ledger id we want to recover. * @param ledgerIterCb * IterationCallback to invoke once we've recovered the current * ledger. * @param availableBookies * List of Bookie Servers that are available to use for * replicating data on the failed bookie. This could contain a * single bookie server if the user explicitly chose a bookie * server to replicate data to. */ private void recoverLedger(final InetSocketAddress bookieSrc, final long lId, final AsyncCallback.VoidCallback ledgerIterCb, final List availableBookies) { LOG.debug("Recovering ledger : {}", lId); asyncOpenLedgerNoRecovery(lId, new OpenCallback() { @Override public void openComplete(int rc, final LedgerHandle lh, Object ctx) { if (rc != Code.OK.intValue()) { LOG.error("BK error opening ledger: " + lId, BKException.create(rc)); ledgerIterCb.processResult(rc, null, null); return; } LedgerMetadata lm = lh.getLedgerMetadata(); if (!lm.isClosed() && lm.getEnsembles().size() > 0) { Long lastKey = lm.getEnsembles().lastKey(); ArrayList lastEnsemble = lm.getEnsembles().get(lastKey); // The original write has not removed the faulty bookie from // the current ledger ensemble. To avoid data loss in // the case of concurrent updates to the ensemble composition, // the recovery tool should first close the ledger. if (lastEnsemble.contains(bookieSrc)) { // close the opened non-recovery ledger handle try { lh.close(); } catch (Exception ie) { LOG.warn("Error closing non recovery ledger handle for ledger " + lId, ie); } asyncOpenLedger(lId, new OpenCallback() { @Override public void openComplete(int newrc, final LedgerHandle newlh, Object newctx) { if (newrc != Code.OK.intValue()) { LOG.error("BK error closing ledger: " + lId, BKException.create(newrc)); ledgerIterCb.processResult(newrc, null, null); return; } // do recovery recoverLedger(bookieSrc, lId, ledgerIterCb, availableBookies); } }, null); return; } } /* * This List stores the ledger fragments to recover indexed by * the start entry ID for the range. The ensembles TreeMap is * keyed off this. */ final List ledgerFragmentsToRecover = new LinkedList(); /* * This Map will store the start and end entry ID values for * each of the ledger fragment ranges. The only exception is the * current active fragment since it has no end yet. In the event * of a bookie failure, a new ensemble is created so the current * ensemble should not contain the dead bookie we are trying to * recover. */ Map ledgerFragmentsRange = new HashMap(); Long curEntryId = null; for (Map.Entry> entry : lh.getLedgerMetadata().getEnsembles() .entrySet()) { if (curEntryId != null) ledgerFragmentsRange.put(curEntryId, entry.getKey() - 1); curEntryId = entry.getKey(); if (entry.getValue().contains(bookieSrc)) { /* * Current ledger fragment has entries stored on the * dead bookie so we'll need to recover them.
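 * The fragment boundaries come from consecutive ensemble keys; for
 * example (hypothetical metadata), ensembles keyed at entries 0, 100
 * and 250 yield the ranges [0, 99], [100, 249] and
 * [250, lastAddConfirmed].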
*/ ledgerFragmentsToRecover.add(entry.getKey()); } } // Add the last ensemble; otherwise, if the failed bookie existed in // the last ensemble of a closed ledger, the entries belonging to the // last ensemble would not be replicated. if (curEntryId != null) { ledgerFragmentsRange.put(curEntryId, lh.getLastAddConfirmed()); } /* * See if this current ledger contains any ledger fragment that * needs to be re-replicated. If not, then just invoke the * multiCallback and return. */ if (ledgerFragmentsToRecover.size() == 0) { ledgerIterCb.processResult(BKException.Code.OK, null, null); return; } /* * Multicallback for ledger. Once all fragments for the ledger have been recovered * trigger the ledgerIterCb */ MultiCallback ledgerFragmentsMcb = new MultiCallback(ledgerFragmentsToRecover.size(), ledgerIterCb, null, BKException.Code.OK, BKException.Code.LedgerRecoveryException); /* * Now recover all of the necessary ledger fragments * asynchronously using a MultiCallback for every fragment. */ for (final Long startEntryId : ledgerFragmentsToRecover) { Long endEntryId = ledgerFragmentsRange.get(startEntryId); InetSocketAddress newBookie = null; try { newBookie = getNewBookie(lh.getLedgerMetadata().getEnsembles().get(startEntryId), availableBookies); } catch (BKException.BKNotEnoughBookiesException bke) { ledgerFragmentsMcb.processResult(BKException.Code.NotEnoughBookiesException, null, null); continue; } if (LOG.isDebugEnabled()) { LOG.debug("Replicating fragment [" + startEntryId + "," + endEntryId + "] of ledger " + lh.getId() + " to " + newBookie); } try { LedgerFragmentReplicator.SingleFragmentCallback cb = new LedgerFragmentReplicator.SingleFragmentCallback( ledgerFragmentsMcb, lh, startEntryId, bookieSrc, newBookie); ArrayList currentEnsemble = lh.getLedgerMetadata().getEnsemble(startEntryId); int bookieIndex = -1; if (null != currentEnsemble) { for (int i = 0; i < currentEnsemble.size(); i++) { if (currentEnsemble.get(i).equals(bookieSrc)) { bookieIndex = i; break; } } } LedgerFragment ledgerFragment = new LedgerFragment(lh, startEntryId, endEntryId, bookieIndex); asyncRecoverLedgerFragment(lh, ledgerFragment, cb, newBookie); } catch(InterruptedException e) { Thread.currentThread().interrupt(); return; } } } }, null); } /** * This method asynchronously recovers a ledger fragment which is a * contiguous portion of a ledger that was stored in an ensemble that * included the failed bookie. * * @param lh * - LedgerHandle for the ledger * @param lf * - LedgerFragment to replicate * @param ledgerFragmentMcb * - MultiCallback to invoke once we've recovered the current * ledger fragment. * @param newBookie * - New bookie we want to use to recover and replicate the * ledger entries that were stored on the failed bookie. */ private void asyncRecoverLedgerFragment(final LedgerHandle lh, final LedgerFragment ledgerFragment, final AsyncCallback.VoidCallback ledgerFragmentMcb, final InetSocketAddress newBookie) throws InterruptedException { lfr.replicate(lh, ledgerFragment, ledgerFragmentMcb, newBookie); } /** * Replicate the ledger fragment to the given target bookie. * * @param lh * - ledgerHandle * @param ledgerFragment * - LedgerFragment to replicate * @param targetBookieAddress * - target bookie to which the entries should be replicated.
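 *
 * Usage sketch (the ledger handle, fragment and target address are
 * hypothetical; the call blocks until the fragment has been copied, or
 * throws a BKException on failure):
 * <pre>
 * InetSocketAddress target = new InetSocketAddress("bookie2.example.com", 3181);
 * admin.replicateLedgerFragment(lh, fragment, target);
 * </pre>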
*/ public void replicateLedgerFragment(LedgerHandle lh, final LedgerFragment ledgerFragment, final InetSocketAddress targetBookieAddress) throws InterruptedException, BKException { SyncCounter syncCounter = new SyncCounter(); ResultCallBack resultCallBack = new ResultCallBack(syncCounter); SingleFragmentCallback cb = new SingleFragmentCallback(resultCallBack, lh, ledgerFragment.getFirstEntryId(), ledgerFragment .getAddress(), targetBookieAddress); syncCounter.inc(); asyncRecoverLedgerFragment(lh, ledgerFragment, cb, targetBookieAddress); syncCounter.block(0); if (syncCounter.getrc() != BKException.Code.OK) { throw BKException.create(syncCounter.getrc()); } } /** This is the class for getting the replication result */ static class ResultCallBack implements AsyncCallback.VoidCallback { private SyncCounter sync; public ResultCallBack(SyncCounter sync) { this.sync = sync; } @Override public void processResult(int rc, String s, Object obj) { sync.setrc(rc); sync.dec(); } } /** * Format the BookKeeper metadata in zookeeper * * @param isInteractive * Whether format should prompt for confirmation if old data * exists. * @param force * If non interactive and force is true, then old data will be * removed without prompt. * @return true if the format succeeds, else false. */ public static boolean format(ClientConfiguration conf, boolean isInteractive, boolean force) throws Exception { ZooKeeperWatcherBase w = new ZooKeeperWatcherBase(conf.getZkTimeout()); ZooKeeper zkc = ZkUtils.createConnectedZookeeperClient( conf.getZkServers(), w); BookKeeper bkc = null; try { boolean ledgerRootExists = null != zkc.exists( conf.getZkLedgersRootPath(), false); boolean availableNodeExists = null != zkc.exists( conf.getZkAvailableBookiesPath(), false); // Create the ledgers root node if it does not exist if (!ledgerRootExists) { zkc.create(conf.getZkLedgersRootPath(), "".getBytes(), Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT); } // Create the available bookies node if it does not exist if (!availableNodeExists) { zkc.create(conf.getZkAvailableBookiesPath(), "".getBytes(), Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT); } // If old data was there then confirm with admin. if (ledgerRootExists) { boolean confirm = false; if (!isInteractive) { // If non interactive, delete old data only when force is set. confirm = force; } else { // Confirm with the admin. confirm = IOUtils .confirmPrompt("Ledger root already exists. " +"Are you sure you want to format the bookkeeper metadata?
" +"This may cause data loss."); } if (!confirm) { LOG.error("BookKeeper metadata Format aborted!!"); return false; } } bkc = new BookKeeper(conf, zkc); // Format all ledger metadata layout bkc.ledgerManagerFactory.format(conf, zkc); // Clear the cookies try { ZKUtil.deleteRecursive(zkc, conf.getZkLedgersRootPath() + "/cookies"); } catch (KeeperException.NoNodeException e) { LOG.debug("cookies node not exists in zookeeper to delete"); } // Clear the INSTANCEID try { zkc.delete(conf.getZkLedgersRootPath() + "/" + BookKeeperConstants.INSTANCEID, -1); } catch (KeeperException.NoNodeException e) { LOG.debug("INSTANCEID not exists in zookeeper to delete"); } // create INSTANCEID String instanceId = UUID.randomUUID().toString(); zkc.create(conf.getZkLedgersRootPath() + "/" + BookKeeperConstants.INSTANCEID, instanceId.getBytes(), Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT); LOG.info("Successfully formatted BookKeeper metadata"); } finally { if (null != bkc) { bkc.close(); } if (null != zkc) { zkc.close(); } } return true; } /** * This method returns an iterable object for the list of ledger identifiers of * the ledgers currently available. * * @return an iterable object for the list of ledger identifiers * @throws IOException if the list of ledger identifiers cannot be read from the * metadata store */ public Iterable listLedgers() throws IOException { final LedgerRangeIterator iterator = bkc.getLedgerManager().getLedgerRanges(); return new Iterable() { public Iterator iterator() { return new Iterator() { Iterator currentRange = null; @Override public boolean hasNext() { try { if (iterator.hasNext()) { LOG.info("I'm in this part of"); return true; } else if (currentRange != null) { if (currentRange.hasNext()) { return true; } } } catch (IOException e) { LOG.error("Error while checking if there is a next element", e); } return false; } @Override public Long next() throws NoSuchElementException { try{ if (currentRange == null) { currentRange = iterator.next().getLedgers().iterator(); } } catch (IOException e) { LOG.error("Error while reading the next element", e); throw new NoSuchElementException(e.getMessage()); } return currentRange.next(); } @Override public void remove() throws UnsupportedOperationException { throw new UnsupportedOperationException(); } }; } }; } /** * @return the metadata for the passed ledger handle */ public LedgerMetadata getLedgerMetadata(LedgerHandle lh) { return lh.getLedgerMetadata(); } } BookieWatcher.java000066400000000000000000000331041244507361200343060ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/clientpackage org.apache.bookkeeper.client; /** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import java.io.IOException; import java.net.InetSocketAddress; import java.util.ArrayList; import java.util.Collections; import java.util.Collection; import java.util.HashSet; import java.util.List; import java.util.Set; import java.util.concurrent.LinkedBlockingQueue; import java.util.concurrent.ScheduledExecutorService; import java.util.concurrent.TimeUnit; import org.apache.bookkeeper.client.BKException.BKNotEnoughBookiesException; import org.apache.bookkeeper.conf.ClientConfiguration; import org.apache.bookkeeper.util.BookKeeperConstants; import org.apache.bookkeeper.util.SafeRunnable; import org.apache.bookkeeper.util.StringUtils; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.apache.zookeeper.CreateMode; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.WatchedEvent; import org.apache.zookeeper.Watcher; import org.apache.zookeeper.AsyncCallback.ChildrenCallback; import org.apache.zookeeper.KeeperException.Code; import org.apache.zookeeper.KeeperException.NodeExistsException; import org.apache.zookeeper.Watcher.Event.EventType; import org.apache.zookeeper.ZooDefs.Ids; /** * This class is responsible for maintaining a consistent view of what bookies * are available by reading Zookeeper (and setting watches on the bookie nodes). * When a bookie fails, the other parts of the code turn to this class to find a * replacement * */ class BookieWatcher implements Watcher, ChildrenCallback { static final Logger logger = LoggerFactory.getLogger(BookieWatcher.class); // Bookie registration path in ZK private final String bookieRegistrationPath; static final Set EMPTY_SET = new HashSet(); public static int ZK_CONNECT_BACKOFF_SEC = 1; final BookKeeper bk; HashSet knownBookies = new HashSet(); final ScheduledExecutorService scheduler; SafeRunnable reReadTask = new SafeRunnable() { @Override public void safeRun() { readBookies(); } }; private ReadOnlyBookieWatcher readOnlyBookieWatcher; public BookieWatcher(ClientConfiguration conf, ScheduledExecutorService scheduler, BookKeeper bk) throws KeeperException, InterruptedException { this.bk = bk; // ZK bookie registration path this.bookieRegistrationPath = conf.getZkAvailableBookiesPath(); this.scheduler = scheduler; readOnlyBookieWatcher = new ReadOnlyBookieWatcher(conf, bk); } void notifyBookiesChanged(final BookiesListener listener) throws BKException { try { bk.getZkHandle().getChildren(this.bookieRegistrationPath, new Watcher() { public void process(WatchedEvent event) { // listen children changed event from ZooKeeper if (event.getType() == EventType.NodeChildrenChanged) { listener.availableBookiesChanged(); } } }); } catch (KeeperException ke) { logger.error("Error registering watcher with zookeeper", ke); throw new BKException.ZKException(); } catch (InterruptedException ie) { Thread.currentThread().interrupt(); logger.error("Interrupted registering watcher with zookeeper", ie); throw new BKException.BKInterruptedException(); } } public Collection getBookies() throws BKException { try { List children = bk.getZkHandle().getChildren(this.bookieRegistrationPath, false); children.remove(BookKeeperConstants.READONLY); return convertToBookieAddresses(children); } catch (KeeperException ke) { logger.error("Failed to get bookie list : ", ke); throw new BKException.ZKException(); } catch (InterruptedException ie) { Thread.currentThread().interrupt(); logger.error("Interrupted reading bookie list", ie); throw new BKException.BKInterruptedException(); } } Collection getReadOnlyBookies() { return new 
HashSet(readOnlyBookieWatcher.getReadOnlyBookies()); } public void readBookies() { readBookies(this); } public void readBookies(ChildrenCallback callback) { bk.getZkHandle().getChildren(this.bookieRegistrationPath, this, callback, null); } @Override public void process(WatchedEvent event) { readBookies(); } @Override public void processResult(int rc, String path, Object ctx, List children) { if (rc != KeeperException.Code.OK.intValue()) { //logger.error("Error while reading bookies", KeeperException.create(Code.get(rc), path)); // try the read after a second again scheduler.schedule(reReadTask, ZK_CONNECT_BACKOFF_SEC, TimeUnit.SECONDS); return; } // Just exclude the 'readonly' znode to exclude r-o bookies from // available nodes list. children.remove(BookKeeperConstants.READONLY); HashSet newBookieAddrs = convertToBookieAddresses(children); final HashSet deadBookies; synchronized (this) { deadBookies = (HashSet)knownBookies.clone(); deadBookies.removeAll(newBookieAddrs); // No need to close readonly bookie clients. deadBookies.removeAll(readOnlyBookieWatcher.getReadOnlyBookies()); knownBookies = newBookieAddrs; } if (bk.getBookieClient() != null) { bk.getBookieClient().closeClients(deadBookies); } } private static HashSet convertToBookieAddresses(List children) { // Read the bookie addresses into a set for efficient lookup HashSet newBookieAddrs = new HashSet(); for (String bookieAddrString : children) { InetSocketAddress bookieAddr; try { bookieAddr = StringUtils.parseAddr(bookieAddrString); } catch (IOException e) { logger.error("Could not parse bookie address: " + bookieAddrString + ", ignoring this bookie"); continue; } newBookieAddrs.add(bookieAddr); } return newBookieAddrs; } /** * Blocks until bookies are read from zookeeper, used in the {@link BookKeeper} constructor. 
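 * The asynchronous ZooKeeper children read is bridged to a blocking call
 * by pushing its result code onto a LinkedBlockingQueue and taking from
 * it once.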
* @throws InterruptedException * @throws KeeperException */ public void readBookiesBlocking() throws InterruptedException, KeeperException { // Read readonly bookies first readOnlyBookieWatcher.readROBookiesBlocking(); final LinkedBlockingQueue queue = new LinkedBlockingQueue(); readBookies(new ChildrenCallback() { public void processResult(int rc, String path, Object ctx, List children) { try { BookieWatcher.this.processResult(rc, path, ctx, children); queue.put(rc); } catch (InterruptedException e) { logger.error("Interrupted when trying to read bookies in a blocking fashion"); throw new RuntimeException(e); } } }); int rc = queue.take(); if (rc != KeeperException.Code.OK.intValue()) { throw KeeperException.create(Code.get(rc)); } } /** * Wrapper over the {@link #getAdditionalBookies(Set, int)} method when there is no exclusion list (or existing bookies) * @param numBookiesNeeded * @return * @throws BKNotEnoughBookiesException */ public ArrayList getNewBookies(int numBookiesNeeded) throws BKNotEnoughBookiesException { return getAdditionalBookies(EMPTY_SET, numBookiesNeeded); } /** * Wrapper over the {@link #getAdditionalBookies(Set, int)} method when you just need 1 extra bookie * @param existingBookies * @return * @throws BKNotEnoughBookiesException */ public InetSocketAddress getAdditionalBookie(List existingBookies) throws BKNotEnoughBookiesException { return getAdditionalBookies(new HashSet(existingBookies), 1).get(0); } /** * Returns additional bookies given an exclusion list and how many are needed * @param existingBookies * @param numAdditionalBookiesNeeded * @return * @throws BKNotEnoughBookiesException */ public ArrayList getAdditionalBookies(Set existingBookies, int numAdditionalBookiesNeeded) throws BKNotEnoughBookiesException { ArrayList newBookies = new ArrayList(); if (numAdditionalBookiesNeeded <= 0) { return newBookies; } List allBookies; synchronized (this) { allBookies = new ArrayList(knownBookies); } Collections.shuffle(allBookies); for (InetSocketAddress bookie : allBookies) { if (existingBookies.contains(bookie)) { continue; } newBookies.add(bookie); numAdditionalBookiesNeeded--; if (numAdditionalBookiesNeeded == 0) { return newBookies; } } throw new BKNotEnoughBookiesException(); } /** * Watcher implementation to watch the readonly bookies under * <available>/readonly */ private static class ReadOnlyBookieWatcher implements Watcher, ChildrenCallback { private final static Logger LOG = LoggerFactory.getLogger(ReadOnlyBookieWatcher.class); private HashSet readOnlyBookies = new HashSet(); private BookKeeper bk; private String readOnlyBookieRegPath; public ReadOnlyBookieWatcher(ClientConfiguration conf, BookKeeper bk) throws KeeperException, InterruptedException { this.bk = bk; readOnlyBookieRegPath = conf.getZkAvailableBookiesPath() + "/" + BookKeeperConstants.READONLY; if (null == bk.getZkHandle().exists(readOnlyBookieRegPath, false)) { try { bk.getZkHandle().create(readOnlyBookieRegPath, new byte[0], Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT); } catch (NodeExistsException e) { // the znode was just created concurrently by someone else; ignore } } } @Override public void process(WatchedEvent event) { readROBookies(); } // Read the readonly bookies in a blocking fashion. Used only for the // first read.
void readROBookiesBlocking() throws InterruptedException, KeeperException { final LinkedBlockingQueue queue = new LinkedBlockingQueue(); readROBookies(new ChildrenCallback() { public void processResult(int rc, String path, Object ctx, List children) { try { ReadOnlyBookieWatcher.this.processResult(rc, path, ctx, children); queue.put(rc); } catch (InterruptedException e) { logger.error("Interrupted when trying to read readonly bookies in a blocking fashion"); throw new RuntimeException(e); } } }); int rc = queue.take(); if (rc != KeeperException.Code.OK.intValue()) { throw KeeperException.create(Code.get(rc)); } } // Read children and register watcher for readonly bookies path void readROBookies(ChildrenCallback callback) { bk.getZkHandle().getChildren(this.readOnlyBookieRegPath, this, callback, null); } void readROBookies() { readROBookies(this); } @Override public void processResult(int rc, String path, Object ctx, List children) { if (rc != Code.OK.intValue()) { LOG.error("Not able to read readonly bookies : ", KeeperException.create(Code.get(rc))); return; } HashSet newReadOnlyBookies = convertToBookieAddresses(children); readOnlyBookies = newReadOnlyBookies; } // returns the readonly bookies public HashSet getReadOnlyBookies() { return readOnlyBookies; } } } BookiesListener.java000066400000000000000000000017241244507361200346640ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/clientpackage org.apache.bookkeeper.client; /** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ /** * Listener for changes to the set of available bookies. */ public interface BookiesListener { void availableBookiesChanged(); } CRC32DigestManager.java000066400000000000000000000031221244507361200347640ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/clientpackage org.apache.bookkeeper.client; /* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License.
*/ import java.nio.ByteBuffer; import java.util.zip.CRC32; class CRC32DigestManager extends DigestManager { private final ThreadLocal<CRC32> crc = new ThreadLocal<CRC32>() { @Override protected CRC32 initialValue() { return new CRC32(); } }; public CRC32DigestManager(long ledgerId) { super(ledgerId); } @Override int getMacCodeLength() { return 8; } @Override byte[] getValueAndReset() { byte[] value = new byte[8]; ByteBuffer buf = ByteBuffer.wrap(value); buf.putLong(crc.get().getValue()); crc.get().reset(); return value; } @Override void update(byte[] data, int offset, int length) { crc.get().update(data, offset, length); } } DigestManager.java000066400000000000000000000157161244507361200343020ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/clientpackage org.apache.bookkeeper.client; /** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import java.nio.ByteBuffer; import java.security.GeneralSecurityException; import org.apache.bookkeeper.client.BKException.BKDigestMatchException; import org.apache.bookkeeper.client.BookKeeper.DigestType; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.jboss.netty.buffer.ChannelBuffer; import org.jboss.netty.buffer.ChannelBufferInputStream; import org.jboss.netty.buffer.ChannelBuffers; /** * This class takes an entry, attaches a digest to it and packages it with relevant * data so that it can be shipped to the bookie. On the return side, it also * gets a packet, checks that the digest matches, and extracts the original entry * from the packet. Currently 2 types of digests are supported: MAC (based on SHA-1) and CRC32 */ abstract class DigestManager { static final Logger logger = LoggerFactory.getLogger(DigestManager.class); static final int METADATA_LENGTH = 32; long ledgerId; abstract int getMacCodeLength(); void update(byte[] data) { update(data, 0, data.length); } abstract void update(byte[] data, int offset, int length); abstract byte[] getValueAndReset(); final int macCodeLength; public DigestManager(long ledgerId) { this.ledgerId = ledgerId; macCodeLength = getMacCodeLength(); } static DigestManager instantiate(long ledgerId, byte[] passwd, DigestType digestType) throws GeneralSecurityException { switch(digestType) { case MAC: return new MacDigestManager(ledgerId, passwd); case CRC32: return new CRC32DigestManager(ledgerId); default: throw new GeneralSecurityException("Unknown checksum type: " + digestType); } } /** * Computes the digest for an entry and puts the bytes together for sending.
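 * The outgoing packet layout is: ledgerId, entryId, lastAddConfirmed and
 * length as four 8-byte longs (the 32 bytes of METADATA_LENGTH), followed
 * by the digest itself and then the entry payload.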
* * @param entryId * @param lastAddConfirmed * @param length * @param data * @return */ public ChannelBuffer computeDigestAndPackageForSending(long entryId, long lastAddConfirmed, long length, byte[] data, int doffset, int dlength) { byte[] bufferArray = new byte[METADATA_LENGTH + macCodeLength]; ByteBuffer buffer = ByteBuffer.wrap(bufferArray); buffer.putLong(ledgerId); buffer.putLong(entryId); buffer.putLong(lastAddConfirmed); buffer.putLong(length); buffer.flip(); update(buffer.array(), 0, METADATA_LENGTH); update(data, doffset, dlength); byte[] digest = getValueAndReset(); buffer.limit(buffer.capacity()); buffer.position(METADATA_LENGTH); buffer.put(digest); buffer.flip(); return ChannelBuffers.wrappedBuffer(ChannelBuffers.wrappedBuffer(buffer), ChannelBuffers.wrappedBuffer(data, doffset, dlength)); } private void verifyDigest(ChannelBuffer dataReceived) throws BKDigestMatchException { verifyDigest(LedgerHandle.INVALID_ENTRY_ID, dataReceived, true); } private void verifyDigest(long entryId, ChannelBuffer dataReceived) throws BKDigestMatchException { verifyDigest(entryId, dataReceived, false); } private void verifyDigest(long entryId, ChannelBuffer dataReceived, boolean skipEntryIdCheck) throws BKDigestMatchException { ByteBuffer dataReceivedBuffer = dataReceived.toByteBuffer(); byte[] digest; if ((METADATA_LENGTH + macCodeLength) > dataReceived.readableBytes()) { logger.error("Data received is smaller than the minimum for this digest type. " + " Either the packet is corrupt, or the wrong digest is configured. " + " Digest type: {}, Packet Length: {}", this.getClass().getName(), dataReceived.readableBytes()); throw new BKDigestMatchException(); } update(dataReceivedBuffer.array(), dataReceivedBuffer.position(), METADATA_LENGTH); int offset = METADATA_LENGTH + macCodeLength; update(dataReceivedBuffer.array(), dataReceivedBuffer.position() + offset, dataReceived.readableBytes() - offset); digest = getValueAndReset(); for (int i = 0; i < digest.length; i++) { if (digest[i] != dataReceived.getByte(METADATA_LENGTH + i)) { logger.error("Mac mismatch for ledger-id: " + ledgerId + ", entry-id: " + entryId); throw new BKDigestMatchException(); } } long actualLedgerId = dataReceived.readLong(); long actualEntryId = dataReceived.readLong(); if (actualLedgerId != ledgerId) { logger.error("Ledger-id mismatch in authenticated message, expected: " + ledgerId + " , actual: " + actualLedgerId); throw new BKDigestMatchException(); } if (!skipEntryIdCheck && actualEntryId != entryId) { logger.error("Entry-id mismatch in authenticated message, expected: " + entryId + " , actual: " + actualEntryId); throw new BKDigestMatchException(); } } /** * Verify that the digest matches and returns the data in the entry.
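 * On success the reader index of the returned stream has already been
 * advanced past the metadata and digest, so the stream yields only the
 * entry payload.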
* * @param entryId * @param dataReceived * @return * @throws BKDigestMatchException */ ChannelBufferInputStream verifyDigestAndReturnData(long entryId, ChannelBuffer dataReceived) throws BKDigestMatchException { verifyDigest(entryId, dataReceived); dataReceived.readerIndex(METADATA_LENGTH + macCodeLength); return new ChannelBufferInputStream(dataReceived); } static class RecoveryData { long lastAddConfirmed; long length; public RecoveryData(long lastAddConfirmed, long length) { this.lastAddConfirmed = lastAddConfirmed; this.length = length; } } RecoveryData verifyDigestAndReturnLastConfirmed(ChannelBuffer dataReceived) throws BKDigestMatchException { verifyDigest(dataReceived); dataReceived.readerIndex(8); dataReceived.readLong(); // skip unused entryId long lastAddConfirmed = dataReceived.readLong(); long length = dataReceived.readLong(); return new RecoveryData(lastAddConfirmed, length); } } DistributionSchedule.java000066400000000000000000000061311244507361200357140ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/client/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.bookkeeper.client; import java.util.List; /** * This interface determines how entries are distributed among bookies. * * Every entry gets replicated to some number of replicas. The first replica for * an entry is given a replicaIndex of 0, and so on. To distribute write load, * not all entries go to all bookies. Given an entry-id and replica index, a * {@link DistributionSchedule} determines which bookie that replica should go * to. */ interface DistributionSchedule { /** * return the set of bookie indices to send the message to */ public List getWriteSet(long entryId); /** * An ack set represents the set of bookies from which * a response must be received so that an entry can be * considered to be replicated on a quorum. */ public interface AckSet { /** * Add a bookie response and check if quorum has been met * @return true if quorum has been met, false otherwise */ public boolean addBookieAndCheck(int bookieIndexHeardFrom); /** * Invalidate a previous bookie response. * Used for reissuing write requests. */ public void removeBookie(int bookie); } /** * Returns an ackset object, responses should be checked against this */ public AckSet getAckSet(); /** * Interface to keep track of which bookies in an ensemble an action * has been performed for. */ public interface QuorumCoverageSet { /** * Add a bookie to the set, and check if every quorum in the set * has had the action performed for it.
* @param bookieIndexHeardFrom Bookie we've just heard from * @return whether all quorums have been covered */ public boolean addBookieAndCheckCovered(int bookieIndexHeardFrom); } public QuorumCoverageSet getCoverageSet(); /** * Whether the entry is present on the given bookie index * * @param entryId * - entryId whose presence on the given bookie index is to be checked * @param bookieIndex * - bookie index on which the possible presence of the entry needs * to be checked * @return true if the bookie index has the entry, otherwise false. */ public boolean hasEntry(long entryId, int bookieIndex); } LedgerChecker.java000066400000000000000000000260761244507361200342570ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/client/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.client; import java.util.ArrayList; import java.util.HashSet; import java.util.Set; import java.util.Map; import org.apache.bookkeeper.proto.BookieClient; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.GenericCallback; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.ReadEntryCallback; import org.jboss.netty.buffer.ChannelBuffer; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import java.net.InetSocketAddress; import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicLong; import java.util.concurrent.atomic.AtomicInteger; /** * Checks the complete ledger and finds the under-replicated fragments, if any. */ public class LedgerChecker { private static Logger LOG = LoggerFactory.getLogger(LedgerChecker.class); public final BookieClient bookieClient; static class InvalidFragmentException extends Exception { private static final long serialVersionUID = 1467201276417062353L; } /** * Collects all of the entry read callbacks and, once the expected number * of responses has arrived, invokes the callback that is waiting on them. */ private static class ReadManyEntriesCallback implements ReadEntryCallback { AtomicBoolean completed = new AtomicBoolean(false); final AtomicLong numEntries; final LedgerFragment fragment; final GenericCallback cb; ReadManyEntriesCallback(long numEntries, LedgerFragment fragment, GenericCallback cb) { this.numEntries = new AtomicLong(numEntries); this.fragment = fragment; this.cb = cb; } public void readEntryComplete(int rc, long ledgerId, long entryId, ChannelBuffer buffer, Object ctx) { if (rc == BKException.Code.OK) { if (numEntries.decrementAndGet() == 0 && !completed.getAndSet(true)) { cb.operationComplete(rc, fragment); } } else if (!completed.getAndSet(true)) { cb.operationComplete(rc, fragment); } } } public LedgerChecker(BookKeeper bkc) { bookieClient = bkc.getBookieClient(); } private void verifyLedgerFragment(LedgerFragment
fragment, GenericCallback cb) throws InvalidFragmentException { long firstStored = fragment.getFirstStoredEntryId(); long lastStored = fragment.getLastStoredEntryId(); if (firstStored == LedgerHandle.INVALID_ENTRY_ID) { if (lastStored != LedgerHandle.INVALID_ENTRY_ID) { throw new InvalidFragmentException(); } cb.operationComplete(BKException.Code.OK, fragment); return; } if (firstStored == lastStored) { ReadManyEntriesCallback manycb = new ReadManyEntriesCallback(1, fragment, cb); bookieClient.readEntry(fragment.getAddress(), fragment .getLedgerId(), firstStored, manycb, null); } else { ReadManyEntriesCallback manycb = new ReadManyEntriesCallback(2, fragment, cb); bookieClient.readEntry(fragment.getAddress(), fragment .getLedgerId(), firstStored, manycb, null); bookieClient.readEntry(fragment.getAddress(), fragment .getLedgerId(), lastStored, manycb, null); } } /** * Callback for checking whether an entry exists or not. * It is used to differentiate the cases where it has been written * but now cannot be read, and where it never has been written. */ private static class EntryExistsCallback implements ReadEntryCallback { AtomicBoolean entryMayExist = new AtomicBoolean(false); final AtomicInteger numReads; final GenericCallback cb; EntryExistsCallback(int numReads, GenericCallback cb) { this.numReads = new AtomicInteger(numReads); this.cb = cb; } public void readEntryComplete(int rc, long ledgerId, long entryId, ChannelBuffer buffer, Object ctx) { if (rc != BKException.Code.NoSuchEntryException) { entryMayExist.set(true); } if (numReads.decrementAndGet() == 0) { cb.operationComplete(rc, entryMayExist.get()); } } } /** * Collects all of the fragment read callbacks and, once the expected * number of responses has arrived, invokes the callback that is waiting * on them. */ private static class FullLedgerCallback implements GenericCallback { final Set badFragments; final AtomicLong numFragments; final GenericCallback> cb; FullLedgerCallback(long numFragments, GenericCallback> cb) { badFragments = new HashSet(); this.numFragments = new AtomicLong(numFragments); this.cb = cb; } public void operationComplete(int rc, LedgerFragment result) { if (rc != BKException.Code.OK) { badFragments.add(result); } if (numFragments.decrementAndGet() == 0) { cb.operationComplete(BKException.Code.OK, badFragments); } } } /** * Check all of the fragments in the passed-in ledger, and report those * which are missing. */ public void checkLedger(LedgerHandle lh, final GenericCallback> cb) { // build a set of all fragment replicas final Set fragments = new HashSet(); Long curEntryId = null; ArrayList curEnsemble = null; for (Map.Entry> e : lh .getLedgerMetadata().getEnsembles().entrySet()) { if (curEntryId != null) { for (int i = 0; i < curEnsemble.size(); i++) { fragments.add(new LedgerFragment(lh, curEntryId, e.getKey() - 1, i)); } } curEntryId = e.getKey(); curEnsemble = e.getValue(); } /* Checking the last segment of the ledger can be complicated in some cases. * In the case that the ledger is closed, we can just check the fragments of * the segment as normal, except in the case that no entry was ever written * to the ledger, in which case we check no fragments. * In the case that the ledger is open, but enough entries have been written * for lastAddConfirmed to be set above the start entry of the segment, we * can also check as normal.
* However, if lastAddConfirmed cannot be trusted, such as when it's lower than * the first entry id, or not set at all, we cannot be sure if there has been * data written to the segment. For this reason, we have to send a read request * to the bookies which should have the first entry. If they respond with * NoSuchEntry we can assume it was never written. If they respond with anything * else, we must assume the entry has been written, so we run the check. */ if (curEntryId != null && !(lh.getLedgerMetadata().isClosed() && lh.getLastAddConfirmed() < curEntryId)) { long lastEntry = lh.getLastAddConfirmed(); if (lastEntry < curEntryId) { lastEntry = curEntryId; } final Set finalSegmentFragments = new HashSet(); for (int i = 0; i < curEnsemble.size(); i++) { finalSegmentFragments.add(new LedgerFragment(lh, curEntryId, lastEntry, i)); } // Check for the case that no last confirmed entry has // been set. if (curEntryId == lastEntry) { final long entryToRead = curEntryId; EntryExistsCallback eecb = new EntryExistsCallback(lh.getLedgerMetadata().getWriteQuorumSize(), new GenericCallback() { public void operationComplete(int rc, Boolean result) { if (result) { fragments.addAll(finalSegmentFragments); } checkFragments(fragments, cb); } }); for (int bi : lh.getDistributionSchedule().getWriteSet(entryToRead)) { InetSocketAddress addr = curEnsemble.get(bi); bookieClient.readEntry(addr, lh.getId(), entryToRead, eecb, null); } return; } else { fragments.addAll(finalSegmentFragments); } } checkFragments(fragments, cb); } private void checkFragments(Set fragments, GenericCallback> cb) { if (fragments.size() == 0) { // no fragments to verify cb.operationComplete(BKException.Code.OK, fragments); return; } // verify all the collected fragment replicas FullLedgerCallback allFragmentsCb = new FullLedgerCallback(fragments .size(), cb); for (LedgerFragment r : fragments) { LOG.debug("Checking fragment {}", r); try { verifyLedgerFragment(r, allFragmentsCb); } catch (InvalidFragmentException ife) { LOG.error("Invalid fragment found : {}", r); allFragmentsCb.operationComplete( BKException.Code.IncorrectParameterException, r); } } } } LedgerCreateOp.java000066400000000000000000000102551244507361200344070ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/client/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
* */ package org.apache.bookkeeper.client; import java.net.InetSocketAddress; import java.security.GeneralSecurityException; import java.util.ArrayList; import org.apache.bookkeeper.client.AsyncCallback.CreateCallback; import org.apache.bookkeeper.client.BKException.BKNotEnoughBookiesException; import org.apache.bookkeeper.client.BookKeeper.DigestType; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.GenericCallback; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Encapsulates asynchronous ledger create operation * */ class LedgerCreateOp implements GenericCallback { static final Logger LOG = LoggerFactory.getLogger(LedgerCreateOp.class); CreateCallback cb; LedgerMetadata metadata; LedgerHandle lh; Object ctx; byte[] passwd; BookKeeper bk; DigestType digestType; /** * Constructor * * @param bk * BookKeeper object * @param ensembleSize * ensemble size * @param writeQuorumSize * write quorum size * @param ackQuorumSize * ack quorum size * @param digestType * digest type, either MAC or CRC32 * @param passwd * password * @param cb * callback implementation * @param ctx * optional control object */ LedgerCreateOp(BookKeeper bk, int ensembleSize, int writeQuorumSize, int ackQuorumSize, DigestType digestType, byte[] passwd, CreateCallback cb, Object ctx) { this.bk = bk; this.metadata = new LedgerMetadata(ensembleSize, writeQuorumSize, ackQuorumSize, digestType, passwd); this.digestType = digestType; this.passwd = passwd; this.cb = cb; this.ctx = ctx; } /** * Initiates the operation */ public void initiate() { // allocate ensemble first /* * Adding bookies to ledger handle */ ArrayList ensemble; try { ensemble = bk.bookieWatcher.getNewBookies(metadata.getEnsembleSize()); } catch (BKNotEnoughBookiesException e) { LOG.error("Not enough bookies to create ledger"); cb.createComplete(e.getCode(), null, this.ctx); return; } /* * Add ensemble to the configuration */ metadata.addEnsemble(0L, ensemble); // create a ledger with metadata bk.getLedgerManager().createLedger(metadata, this); } /** * Callback invoked when the ledger has been created. */ @Override public void operationComplete(int rc, Long ledgerId) { if (BKException.Code.OK != rc) { cb.createComplete(rc, null, this.ctx); return; } try { lh = new LedgerHandle(bk, ledgerId, metadata, digestType, passwd); } catch (GeneralSecurityException e) { LOG.error("Security exception while creating ledger: " + ledgerId, e); cb.createComplete(BKException.Code.DigestNotInitializedException, null, this.ctx); return; } catch (NumberFormatException e) { LOG.error("Incorrectly entered parameter throttle: " + bk.getConf().getThrottleValue(), e); cb.createComplete(BKException.Code.IncorrectParameterException, null, this.ctx); return; } // return the ledger handle back cb.createComplete(BKException.Code.OK, lh, this.ctx); } } LedgerDeleteOp.java000066400000000000000000000045011244507361200344030ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/client/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License.
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.client; import org.apache.bookkeeper.client.AsyncCallback.DeleteCallback; import org.apache.bookkeeper.util.OrderedSafeExecutor.OrderedSafeGenericCallback; import org.apache.bookkeeper.versioning.Version; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Encapsulates asynchronous ledger delete operation * */ class LedgerDeleteOp extends OrderedSafeGenericCallback { static final Logger LOG = LoggerFactory.getLogger(LedgerDeleteOp.class); BookKeeper bk; long ledgerId; DeleteCallback cb; Object ctx; /** * Constructor * * @param bk * BookKeeper object * @param ledgerId * ledger Id * @param cb * callback implementation * @param ctx * optional control object */ LedgerDeleteOp(BookKeeper bk, long ledgerId, DeleteCallback cb, Object ctx) { super(bk.mainWorkerPool, ledgerId); this.bk = bk; this.ledgerId = ledgerId; this.cb = cb; this.ctx = ctx; } /** * Initiates the operation */ public void initiate() { // Asynchronously delete the ledger from meta manager // When this completes, it will invoke the callback method below. bk.getLedgerManager().removeLedgerMetadata(ledgerId, Version.ANY, this); } /** * Implements Delete Callback. */ @Override public void safeOperationComplete(int rc, Void result) { cb.deleteComplete(rc, this.ctx); } } LedgerEntry.java000066400000000000000000000046631244507361200340120ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/clientpackage org.apache.bookkeeper.client; /* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ import java.io.IOException; import java.io.InputStream; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.jboss.netty.buffer.ChannelBufferInputStream; /** * Ledger entry. It's a simple tuple containing the ledger id, the entry-id, and * the entry content.
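 *
 * A typical read path obtains entries via
 * {@link LedgerHandle#readEntries(long, long)} (sketch; {@code lh} is a
 * hypothetical open ledger handle with at least one entry):
 * <pre>
 * Enumeration&lt;LedgerEntry&gt; entries = lh.readEntries(0, lh.getLastAddConfirmed());
 * while (entries.hasMoreElements()) {
 *     byte[] data = entries.nextElement().getEntry();
 * }
 * </pre>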
* */ public class LedgerEntry { Logger LOG = LoggerFactory.getLogger(LedgerEntry.class); long ledgerId; long entryId; long length; ChannelBufferInputStream entryDataStream; LedgerEntry(long lId, long eId) { this.ledgerId = lId; this.entryId = eId; } public long getLedgerId() { return ledgerId; } public long getEntryId() { return entryId; } public long getLength() { return length; } public byte[] getEntry() { try { // In general, you can't rely on the available() method of an input // stream, but ChannelBufferInputStream is backed by a byte[] so it // accurately knows the # bytes available byte[] ret = new byte[entryDataStream.available()]; entryDataStream.readFully(ret); return ret; } catch (IOException e) { // ChannelBufferInputStream doesn't really throw IOExceptions; the // exception is only in the signature because InputStream requires // it. Hence this code should never be reached. LOG.error("Unexpected IOException while reading from channel buffer", e); return new byte[0]; } } public InputStream getEntryInputStream() { return entryDataStream; } } LedgerFragment.java000066400000000000000000000111151244507361200344460ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/client/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.client; import java.net.InetSocketAddress; import java.util.ArrayList; import java.util.List; import java.util.SortedMap; /** * Represents the entries of a segment of a ledger which are stored on a single * bookie in the segment's bookie ensemble. * * Used for checking and recovery */ public class LedgerFragment { private final int bookieIndex; private final List ensemble; private final long firstEntryId; private final long lastKnownEntryId; private final long ledgerId; private final DistributionSchedule schedule; private final boolean isLedgerClosed; LedgerFragment(LedgerHandle lh, long firstEntryId, long lastKnownEntryId, int bookieIndex) { this.ledgerId = lh.getId(); this.firstEntryId = firstEntryId; this.lastKnownEntryId = lastKnownEntryId; this.bookieIndex = bookieIndex; this.ensemble = lh.getLedgerMetadata().getEnsemble(firstEntryId); this.schedule = lh.getDistributionSchedule(); SortedMap> ensembles = lh .getLedgerMetadata().getEnsembles(); this.isLedgerClosed = lh.getLedgerMetadata().isClosed() || !ensemble.equals(ensembles.get(ensembles.lastKey())); } /** * Returns true if and only if the ledger fragment will never be modified * by any of the clients in future, otherwise false, i.e.:
     * <ol>
     * <li>If ledger is in closed state, then no other clients can modify this
     * fragment.</li>
     * <li>If ledger is not in closed state and the current fragment is not a
     * last fragment, then no one will modify this fragment.</li>
     * </ol>
*/ public boolean isClosed() { return isLedgerClosed; } long getLedgerId() { return ledgerId; } long getFirstEntryId() { return firstEntryId; } long getLastKnownEntryId() { return lastKnownEntryId; } /** * Gets the failedBookie address */ public InetSocketAddress getAddress() { return ensemble.get(bookieIndex); } /** * Gets the failedBookie index */ public int getBookiesIndex() { return bookieIndex; } /** * Gets the first stored entry id of the fragment in failed bookie. * * @return entryId */ public long getFirstStoredEntryId() { long firstEntry = firstEntryId; for (int i = 0; i < ensemble.size() && firstEntry <= lastKnownEntryId; i++) { if (schedule.hasEntry(firstEntry, bookieIndex)) { return firstEntry; } else { firstEntry++; } } return LedgerHandle.INVALID_ENTRY_ID; } /** * Gets the last stored entry id of the fragment in failed bookie. * * @return entryId */ public long getLastStoredEntryId() { long lastEntry = lastKnownEntryId; for (int i = 0; i < ensemble.size() && lastEntry >= firstEntryId; i++) { if (schedule.hasEntry(lastEntry, bookieIndex)) { return lastEntry; } else { lastEntry--; } } return LedgerHandle.INVALID_ENTRY_ID; } /** * Gets the ensemble of fragment * * @return the ensemble for the segment which this fragment is a part of */ public List getEnsemble() { return this.ensemble; } @Override public String toString() { return String.format("Fragment(LedgerID: %d, FirstEntryID: %d[%d], " + "LastKnownEntryID: %d[%d], Host: %s, Closed: %s)", ledgerId, firstEntryId, getFirstStoredEntryId(), lastKnownEntryId, getLastStoredEntryId(), getAddress(), isLedgerClosed); } }LedgerFragmentReplicator.java000066400000000000000000000455761244507361200365130ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/client/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
 *
 */
package org.apache.bookkeeper.client;

import java.net.InetSocketAddress;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.HashSet;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
import java.util.Set;

import org.apache.bookkeeper.client.AsyncCallback.ReadCallback;
import org.apache.bookkeeper.proto.BookieProtocol;
import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.GenericCallback;
import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.MultiCallback;
import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.WriteCallback;
import org.apache.bookkeeper.util.OrderedSafeExecutor.OrderedSafeGenericCallback;
import org.apache.zookeeper.AsyncCallback;
import org.apache.zookeeper.KeeperException.Code;
import org.jboss.netty.buffer.ChannelBuffer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * This is the helper class for replicating the fragments from one bookie to
 * another.
 */
public class LedgerFragmentReplicator {

    // BookKeeper instance
    private BookKeeper bkc;

    public LedgerFragmentReplicator(BookKeeper bkc) {
        this.bkc = bkc;
    }

    private static Logger LOG = LoggerFactory
            .getLogger(LedgerFragmentReplicator.class);

    private void replicateFragmentInternal(final LedgerHandle lh,
            final LedgerFragment lf,
            final AsyncCallback.VoidCallback ledgerFragmentMcb,
            final InetSocketAddress newBookie) throws InterruptedException {
        if (!lf.isClosed()) {
            LOG.error("Trying to replicate an unclosed fragment;"
                    + " This is not safe {}", lf);
            ledgerFragmentMcb.processResult(BKException.Code.UnclosedFragmentException,
                    null, null);
            return;
        }
        Long startEntryId = lf.getFirstStoredEntryId();
        Long endEntryId = lf.getLastStoredEntryId();
        if (endEntryId == null) {
            /*
             * Ideally this should never happen if bookie failure is taken care
             * of properly. Nothing we can do though in this case.
             */
            LOG.warn("Dead bookie (" + lf.getAddress()
                    + ") is still part of the current"
                    + " active ensemble for ledgerId: " + lh.getId());
            ledgerFragmentMcb.processResult(BKException.Code.OK, null, null);
            return;
        }
        if (startEntryId > endEntryId) {
            // For an open ledger with no entries, the start entry id is 0 and
            // the end entry id is -1; we can return immediately to trigger a
            // forward read.
            ledgerFragmentMcb.processResult(BKException.Code.OK, null, null);
            return;
        }

        /*
         * Add all the entries to the entriesToReplicate list from
         * firstStoredEntryId to lastStoredEntryId.
         */
        List<Long> entriesToReplicate = new LinkedList<Long>();
        long lastStoredEntryId = lf.getLastStoredEntryId();
        for (long i = lf.getFirstStoredEntryId(); i <= lastStoredEntryId; i++) {
            entriesToReplicate.add(i);
        }
        /*
         * Now asynchronously replicate all of the entries for the ledger
         * fragment that were on the dead bookie.
         */
        MultiCallback ledgerFragmentEntryMcb = new MultiCallback(
                entriesToReplicate.size(), ledgerFragmentMcb, null,
                BKException.Code.OK, BKException.Code.LedgerRecoveryException);
        for (final Long entryId : entriesToReplicate) {
            recoverLedgerFragmentEntry(entryId, lh, ledgerFragmentEntryMcb,
                    newBookie);
        }
    }

    /**
     * This method replicates a ledger fragment, which is a contiguous portion
     * of a ledger that was stored in an ensemble that included the failed
     * bookie. It splits the fragment into multiple sub-fragments, each holding
     * at most the configured rereplicationEntryBatchSize entries, and then
     * re-replicates those batched entry fragments one by one.
 * Once all batched entry fragments have been re-replicated, it updates the
     * ensemble info with the new bookie.
     *
     * @param lh
     *            LedgerHandle for the ledger
     * @param lf
     *            LedgerFragment to replicate
     * @param ledgerFragmentMcb
     *            MultiCallback to invoke once we've recovered the current
     *            ledger fragment.
     * @param targetBookieAddress
     *            New bookie we want to use to recover and replicate the ledger
     *            entries that were stored on the failed bookie.
     */
    void replicate(final LedgerHandle lh, final LedgerFragment lf,
            final AsyncCallback.VoidCallback ledgerFragmentMcb,
            final InetSocketAddress targetBookieAddress)
            throws InterruptedException {
        Set<LedgerFragment> partitionedFragments = splitIntoSubFragments(lh, lf,
                bkc.getConf().getRereplicationEntryBatchSize());
        LOG.info("Fragment :" + lf + " is split into sub fragments :"
                + partitionedFragments);
        replicateNextBatch(lh, partitionedFragments.iterator(),
                ledgerFragmentMcb, targetBookieAddress);
    }

    /** Replicate the batched entry fragments one after another. */
    private void replicateNextBatch(final LedgerHandle lh,
            final Iterator<LedgerFragment> fragments,
            final AsyncCallback.VoidCallback ledgerFragmentMcb,
            final InetSocketAddress targetBookieAddress) {
        if (fragments.hasNext()) {
            try {
                replicateFragmentInternal(lh, fragments.next(),
                        new AsyncCallback.VoidCallback() {
                            @Override
                            public void processResult(int rc, String v, Object ctx) {
                                if (rc != BKException.Code.OK) {
                                    ledgerFragmentMcb.processResult(rc, null, null);
                                } else {
                                    replicateNextBatch(lh, fragments,
                                            ledgerFragmentMcb, targetBookieAddress);
                                }
                            }
                        }, targetBookieAddress);
            } catch (InterruptedException e) {
                ledgerFragmentMcb.processResult(
                        BKException.Code.InterruptedException, null, null);
                Thread.currentThread().interrupt();
            }
        } else {
            ledgerFragmentMcb.processResult(BKException.Code.OK, null, null);
        }
    }

    /**
     * Splits the full fragment into batched entry fragments, keeping up to
     * rereplicationEntryBatchSize entries in each one; the results can be
     * treated as sub-fragments.
     */
    static Set<LedgerFragment> splitIntoSubFragments(LedgerHandle lh,
            LedgerFragment ledgerFragment, long rereplicationEntryBatchSize) {
        Set<LedgerFragment> fragments = new HashSet<LedgerFragment>();
        if (rereplicationEntryBatchSize <= 0) {
            // rereplicationEntryBatchSize cannot be zero or negative;
            // return with the current fragment.
            fragments.add(ledgerFragment);
            return fragments;
        }

        long firstEntryId = ledgerFragment.getFirstStoredEntryId();
        long lastEntryId = ledgerFragment.getLastStoredEntryId();
        long numberOfEntriesToReplicate = (lastEntryId - firstEntryId) + 1;
        long splitsWithFullEntries = numberOfEntriesToReplicate
                / rereplicationEntryBatchSize;
        if (splitsWithFullEntries == 0) { // only one fragment
            fragments.add(ledgerFragment);
            return fragments;
        }

        long fragmentSplitLastEntry = 0;
        for (int i = 0; i < splitsWithFullEntries; i++) {
            fragmentSplitLastEntry = (firstEntryId + rereplicationEntryBatchSize) - 1;
            fragments.add(new LedgerFragment(lh, firstEntryId,
                    fragmentSplitLastEntry, ledgerFragment.getBookiesIndex()));
            firstEntryId = fragmentSplitLastEntry + 1;
        }

        long lastSplitWithPartialEntries = numberOfEntriesToReplicate
                % rereplicationEntryBatchSize;
        if (lastSplitWithPartialEntries > 0) {
            fragments.add(new LedgerFragment(lh, firstEntryId, firstEntryId
                    + lastSplitWithPartialEntries - 1, ledgerFragment
                    .getBookiesIndex()));
        }
        return fragments;
    }

    /**
     * This method asynchronously recovers a specific ledger entry by reading
     * the values via the BookKeeper client (which reads it from one of the
     * other replicas) and then writing it to the chosen new bookie.
     *
     * @param entryId
     *            Ledger Entry ID to recover.
* @param lh * LedgerHandle for the ledger * @param ledgerFragmentEntryMcb * MultiCallback to invoke once we've recovered the current * ledger entry. * @param newBookie * New bookie we want to use to recover and replicate the ledger * entries that were stored on the failed bookie. */ private void recoverLedgerFragmentEntry(final Long entryId, final LedgerHandle lh, final AsyncCallback.VoidCallback ledgerFragmentEntryMcb, final InetSocketAddress newBookie) throws InterruptedException { /* * Read the ledger entry using the LedgerHandle. This will allow us to * read the entry from one of the other replicated bookies other than * the dead one. */ lh.asyncReadEntries(entryId, entryId, new ReadCallback() { @Override public void readComplete(int rc, LedgerHandle lh, Enumeration seq, Object ctx) { if (rc != Code.OK.intValue()) { LOG.error("BK error reading ledger entry: " + entryId, BKException.create(rc)); ledgerFragmentEntryMcb.processResult(rc, null, null); return; } /* * Now that we've read the ledger entry, write it to the new * bookie we've selected. */ LedgerEntry entry = seq.nextElement(); byte[] data = entry.getEntry(); ChannelBuffer toSend = lh.getDigestManager() .computeDigestAndPackageForSending(entryId, lh.getLastAddConfirmed(), entry.getLength(), data, 0, data.length); bkc.getBookieClient().addEntry(newBookie, lh.getId(), lh.getLedgerKey(), entryId, toSend, new WriteCallback() { @Override public void writeComplete(int rc, long ledgerId, long entryId, InetSocketAddress addr, Object ctx) { if (rc != Code.OK.intValue()) { LOG.error( "BK error writing entry for ledgerId: " + ledgerId + ", entryId: " + entryId + ", bookie: " + addr, BKException .create(rc)); } else { if (LOG.isDebugEnabled()) { LOG.debug("Success writing ledger id " + ledgerId + ", entry id " + entryId + " to a new bookie " + addr + "!"); } } /* * Pass the return code result up the chain with * the parent callback. */ ledgerFragmentEntryMcb.processResult(rc, null, null); } }, null, BookieProtocol.FLAG_RECOVERY_ADD); } }, null); } /** * Callback for recovery of a single ledger fragment. Once the fragment has * had all entries replicated, update the ensemble in zookeeper. 
 * Once finished, propagate the callback up to ledgerFragmentsMcb, which
     * should be a multicallback responsible for all fragments in a single
     * ledger.
     */
    static class SingleFragmentCallback implements AsyncCallback.VoidCallback {
        final AsyncCallback.VoidCallback ledgerFragmentsMcb;
        final LedgerHandle lh;
        final long fragmentStartId;
        final InetSocketAddress oldBookie;
        final InetSocketAddress newBookie;

        SingleFragmentCallback(AsyncCallback.VoidCallback ledgerFragmentsMcb,
                LedgerHandle lh, long fragmentStartId,
                InetSocketAddress oldBookie, InetSocketAddress newBookie) {
            this.ledgerFragmentsMcb = ledgerFragmentsMcb;
            this.lh = lh;
            this.fragmentStartId = fragmentStartId;
            this.newBookie = newBookie;
            this.oldBookie = oldBookie;
        }

        @Override
        public void processResult(int rc, String path, Object ctx) {
            if (rc != Code.OK.intValue()) {
                LOG.error("BK error replicating ledger fragments for ledger: "
                        + lh.getId(), BKException.create(rc));
                ledgerFragmentsMcb.processResult(rc, null, null);
                return;
            }
            updateEnsembleInfo(ledgerFragmentsMcb, fragmentStartId, lh,
                    oldBookie, newBookie);
        }
    }

    /** Updates the ensemble with newBookie and notifies the ensembleUpdatedCb. */
    private static void updateEnsembleInfo(
            AsyncCallback.VoidCallback ensembleUpdatedCb, long fragmentStartId,
            LedgerHandle lh, InetSocketAddress oldBookie,
            InetSocketAddress newBookie) {
        /*
         * Update the ledger metadata's ensemble info to point to the new
         * bookie.
         */
        ArrayList<InetSocketAddress> ensemble = lh.getLedgerMetadata()
                .getEnsembles().get(fragmentStartId);
        int deadBookieIndex = ensemble.indexOf(oldBookie);
        ensemble.remove(deadBookieIndex);
        ensemble.add(deadBookieIndex, newBookie);
        lh.writeLedgerConfig(new UpdateEnsembleCb(ensembleUpdatedCb,
                fragmentStartId, lh, oldBookie, newBookie));
    }

    /**
     * Updates the ensemble data with newBookie. Re-reads the metadata on
     * MetadataVersionException and updates the ensemble again. On successful
     * update, it also notifies the outer callback.
     */
    private static class UpdateEnsembleCb implements GenericCallback<Void> {
        final AsyncCallback.VoidCallback ensembleUpdatedCb;
        final LedgerHandle lh;
        final long fragmentStartId;
        final InetSocketAddress oldBookie;
        final InetSocketAddress newBookie;

        public UpdateEnsembleCb(AsyncCallback.VoidCallback ledgerFragmentsMcb,
                long fragmentStartId, LedgerHandle lh,
                InetSocketAddress oldBookie, InetSocketAddress newBookie) {
            this.ensembleUpdatedCb = ledgerFragmentsMcb;
            this.lh = lh;
            this.fragmentStartId = fragmentStartId;
            this.newBookie = newBookie;
            this.oldBookie = oldBookie;
        }

        @Override
        public void operationComplete(int rc, Void result) {
            if (rc == BKException.Code.MetadataVersionException) {
                LOG.warn("Two fragments attempted update at once; ledger id: "
                        + lh.getId() + " startid: " + fragmentStartId);
                // Try again; the previous success (with which this conflicted)
                // will have updated the stat. Other operations (such as
                // addEnsemble) would update it too.
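                // In effect this is an optimistic-concurrency retry: re-read
                // the latest metadata (picking up the new znode version), then
                // attempt updateEnsembleInfo again, so that exactly one of the
                // competing updates wins.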
lh .rereadMetadata(new OrderedSafeGenericCallback( lh.bk.mainWorkerPool, lh.getId()) { @Override public void safeOperationComplete(int rc, LedgerMetadata newMeta) { if (rc != BKException.Code.OK) { LOG .error("Error reading updated ledger metadata for ledger " + lh.getId()); ensembleUpdatedCb.processResult(rc, null, null); } else { lh.metadata = newMeta; updateEnsembleInfo(ensembleUpdatedCb, fragmentStartId, lh, oldBookie, newBookie); } } }); return; } else if (rc != BKException.Code.OK) { LOG.error("Error updating ledger config metadata for ledgerId " + lh.getId() + " : " + BKException.getMessage(rc)); } else { LOG.info("Updated ZK for ledgerId: (" + lh.getId() + " : " + fragmentStartId + ") to point ledger fragments from old dead bookie: (" + oldBookie + ") to new bookie: (" + newBookie + ")"); } /* * Pass the return code result up the chain with the parent * callback. */ ensembleUpdatedCb.processResult(rc, null, null); } } } LedgerHandle.java000066400000000000000000001255001244507361200341000ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/client/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.client; import java.net.InetSocketAddress; import java.security.GeneralSecurityException; import java.util.concurrent.ConcurrentLinkedQueue; import java.util.Arrays; import java.util.ArrayList; import java.util.Enumeration; import java.util.List; import java.util.Queue; import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.RejectedExecutionException; import org.apache.bookkeeper.client.AsyncCallback.ReadLastConfirmedCallback; import org.apache.bookkeeper.client.BKException; import org.apache.bookkeeper.client.AsyncCallback.AddCallback; import org.apache.bookkeeper.client.AsyncCallback.CloseCallback; import org.apache.bookkeeper.client.AsyncCallback.ReadCallback; import org.apache.bookkeeper.client.BKException.BKNotEnoughBookiesException; import org.apache.bookkeeper.client.BookKeeper.DigestType; import org.apache.bookkeeper.client.LedgerMetadata; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.GenericCallback; import org.apache.bookkeeper.util.OrderedSafeExecutor.OrderedSafeGenericCallback; import org.apache.bookkeeper.proto.BookieProtocol; import org.apache.bookkeeper.proto.DataFormats.LedgerMetadataFormat.State; import org.apache.bookkeeper.util.SafeRunnable; import com.google.common.util.concurrent.RateLimiter; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.jboss.netty.buffer.ChannelBuffer; /** * Ledger handle contains ledger metadata and is used to access the read and * write operations to a ledger. 
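 * <p>
 * Typical write-side usage, for illustration (the handle is normally obtained
 * from {@code BookKeeper#createLedger} or {@code BookKeeper#openLedger}):
 * <pre>
 * LedgerHandle lh = bk.createLedger(DigestType.MAC, "passwd".getBytes());
 * long entryId = lh.addEntry("hello".getBytes());
 * lh.close();
 * </pre>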
 */
public class LedgerHandle {
    final static Logger LOG = LoggerFactory.getLogger(LedgerHandle.class);

    final byte[] ledgerKey;
    LedgerMetadata metadata;
    final BookKeeper bk;
    final long ledgerId;
    long lastAddPushed;
    long lastAddConfirmed;
    long length;
    final DigestManager macManager;
    final DistributionSchedule distributionSchedule;

    final RateLimiter throttler;

    /**
     * Invalid entry id. This value is returned from methods which
     * should return an entry id but there is no valid entry available.
     */
    final static public long INVALID_ENTRY_ID = BookieProtocol.INVALID_ENTRY_ID;

    final AtomicInteger blockAddCompletions = new AtomicInteger(0);
    final Queue<PendingAddOp> pendingAddOps = new ConcurrentLinkedQueue<PendingAddOp>();

    LedgerHandle(BookKeeper bk, long ledgerId, LedgerMetadata metadata,
                 DigestType digestType, byte[] password)
            throws GeneralSecurityException, NumberFormatException {
        this.bk = bk;
        this.metadata = metadata;

        if (metadata.isClosed()) {
            lastAddConfirmed = lastAddPushed = metadata.getLastEntryId();
            length = metadata.getLength();
        } else {
            lastAddConfirmed = lastAddPushed = INVALID_ENTRY_ID;
            length = 0;
        }

        this.ledgerId = ledgerId;
        this.throttler = RateLimiter.create(bk.getConf().getThrottleValue());

        macManager = DigestManager.instantiate(ledgerId, password, digestType);
        this.ledgerKey = MacDigestManager.genDigest("ledger", password);
        distributionSchedule = new RoundRobinDistributionSchedule(
                metadata.getWriteQuorumSize(), metadata.getAckQuorumSize(),
                metadata.getEnsembleSize());
    }

    /**
     * Get the id of the current ledger.
     *
     * @return the id of the ledger
     */
    public long getId() {
        return ledgerId;
    }

    /**
     * Get the last confirmed entry id on this ledger. It reads
     * the local state of the ledger handle, which is different
     * from the readLastConfirmed call. If the ledger
     * is not closed and the client is a reader, it is necessary
     * to call readLastConfirmed to obtain an estimate of the
     * last add operation that has been confirmed.
     *
     * @see #readLastConfirmed()
     *
     * @return the last confirmed entry id or {@link #INVALID_ENTRY_ID INVALID_ENTRY_ID} if no entry has been confirmed
     */
    public long getLastAddConfirmed() {
        return lastAddConfirmed;
    }

    /**
     * Get the entry id of the last entry that has been enqueued for addition
     * (but may not yet have been persisted to the ledger).
     *
     * @return the id of the last entry pushed or {@link #INVALID_ENTRY_ID INVALID_ENTRY_ID} if no entry has been pushed
     */
    synchronized public long getLastAddPushed() {
        return lastAddPushed;
    }

    /**
     * Get the Ledger's key/password.
     *
     * @return byte array for the ledger's key/password.
     */
    public byte[] getLedgerKey() {
        return Arrays.copyOf(ledgerKey, ledgerKey.length);
    }

    /**
     * Get the LedgerMetadata.
     *
     * @return LedgerMetadata for the LedgerHandle
     */
    LedgerMetadata getLedgerMetadata() {
        return metadata;
    }

    /**
     * Get the DigestManager.
     *
     * @return DigestManager for the LedgerHandle
     */
    DigestManager getDigestManager() {
        return macManager;
    }

    /**
     * Add to the length of the ledger in bytes.
     *
     * @param delta
     * @return the new length of the ledger in bytes
     */
    long addToLength(long delta) {
        this.length += delta;
        return this.length;
    }

    /**
     * Returns the length of the ledger in bytes.
 *
     * @return the length of the ledger in bytes
     */
    synchronized public long getLength() {
        return this.length;
    }

    /**
     * Get the Distribution Schedule.
     *
     * @return DistributionSchedule for the LedgerHandle
     */
    DistributionSchedule getDistributionSchedule() {
        return distributionSchedule;
    }

    void writeLedgerConfig(GenericCallback<Void> writeCb) {
        LOG.debug("Writing metadata to ledger manager: {}, {}", this.ledgerId,
                metadata.getVersion());
        bk.getLedgerManager().writeLedgerMetadata(ledgerId, metadata, writeCb);
    }

    /**
     * Close this ledger synchronously.
     * @see #asyncClose
     */
    public void close() throws InterruptedException, BKException {
        SyncCounter counter = new SyncCounter();
        counter.inc();

        asyncClose(new SyncCloseCallback(), counter);

        counter.block(0);
        if (counter.getrc() != BKException.Code.OK) {
            throw BKException.create(counter.getrc());
        }
    }

    /**
     * Asynchronous close; any adds in flight will return errors.
     *
     * Closing a ledger will ensure that all clients agree on what the last
     * entry of the ledger is. This ensures that, once the ledger has been
     * closed, all reads from the ledger will return the same set of entries.
     *
     * @param cb
     *          callback implementation
     * @param ctx
     *          control object
     */
    public void asyncClose(CloseCallback cb, Object ctx) {
        asyncCloseInternal(cb, ctx, BKException.Code.LedgerClosedException);
    }

    /**
     * Has the ledger been closed?
     */
    public synchronized boolean isClosed() {
        return metadata.isClosed();
    }

    /**
     * Same as the public version of asyncClose except that this one takes an
     * additional parameter which is the return code to hand to all the
     * pending add ops.
     *
     * @param cb
     * @param ctx
     * @param rc
     */
    void asyncCloseInternal(final CloseCallback cb, final Object ctx, final int rc) {
        bk.mainWorkerPool.submitOrdered(ledgerId, new SafeRunnable() {
            @Override
            public void safeRun() {
                final long prevLastEntryId;
                final long prevLength;
                final State prevState;
                List<PendingAddOp> pendingAdds;

                if (isClosed()) {
                    // TODO: make ledger metadata immutable
                    // Although the metadata is already closed, we don't need to
                    // proceed with the zookeeper metadata update, but we still
                    // need to error out the pending add ops.
                    //
                    // There is a race condition where a pending add op is
                    // enqueued after a close op has reset the ledger metadata
                    // state to unclosed in order to resolve metadata conflicts.
                    // If we don't error out these pending add ops, they would
                    // leak and never be called back.
// // The race condition happen in following sequence: // a) ledger L is fenced // b) write entry E encountered LedgerFencedException, trigger ledger close procedure // c) ledger close encountered metadata version exception and set ledger metadata back to open // d) writer tries to write entry E+1, since ledger metadata is still open (reset by c)) // e) the close procedure in c) resolved the metadata conflicts and set ledger metadata to closed // f) writing entry E+1 encountered LedgerFencedException which will enter ledger close procedure // g) it would find that ledger metadata is closed, then it callbacks immediately without erroring out any pendings synchronized (LedgerHandle.this) { pendingAdds = drainPendingAddsToErrorOut(); } errorOutPendingAdds(rc, pendingAdds); cb.closeComplete(BKException.Code.OK, LedgerHandle.this, ctx); return; } synchronized(LedgerHandle.this) { prevState = metadata.getState(); prevLastEntryId = metadata.getLastEntryId(); prevLength = metadata.getLength(); // drain pending adds first pendingAdds = drainPendingAddsToErrorOut(); // synchronized on LedgerHandle.this to ensure that // lastAddPushed can not be updated after the metadata // is closed. metadata.setLength(length); metadata.close(lastAddConfirmed); lastAddPushed = lastAddConfirmed; } // error out all pending adds during closing, the callbacks shouldn't be // running under any bk locks. errorOutPendingAdds(rc, pendingAdds); if (LOG.isDebugEnabled()) { LOG.debug("Closing ledger: " + ledgerId + " at entryId: " + metadata.getLastEntryId() + " with this many bytes: " + metadata.getLength()); } final class CloseCb extends OrderedSafeGenericCallback { CloseCb() { super(bk.mainWorkerPool, ledgerId); } @Override public void safeOperationComplete(final int rc, Void result) { if (rc == BKException.Code.MetadataVersionException) { rereadMetadata(new OrderedSafeGenericCallback(bk.mainWorkerPool, ledgerId) { @Override public void safeOperationComplete(int newrc, LedgerMetadata newMeta) { if (newrc != BKException.Code.OK) { LOG.error("Error reading new metadata from ledger " + ledgerId + " when closing, code=" + newrc); cb.closeComplete(rc, LedgerHandle.this, ctx); } else { metadata.setState(prevState); if (prevState.equals(State.CLOSED)) { metadata.close(prevLastEntryId); } metadata.setLength(prevLength); if (!metadata.isNewerThan(newMeta) && !metadata.isConflictWith(newMeta)) { // use the new metadata's ensemble, in case re-replication already // replaced some bookies in the ensemble. metadata.setEnsembles(newMeta.getEnsembles()); metadata.setVersion(newMeta.version); metadata.setLength(length); metadata.close(lastAddConfirmed); writeLedgerConfig(new CloseCb()); return; } else { metadata.setLength(length); metadata.close(lastAddConfirmed); LOG.warn("Conditional update ledger metadata for ledger " + ledgerId + " failed."); cb.closeComplete(rc, LedgerHandle.this, ctx); } } } }); } else if (rc != BKException.Code.OK) { LOG.error("Error update ledger metadata for ledger " + ledgerId + " : " + rc); cb.closeComplete(rc, LedgerHandle.this, ctx); } else { cb.closeComplete(BKException.Code.OK, LedgerHandle.this, ctx); } } }; writeLedgerConfig(new CloseCb()); } }); } /** * Read a sequence of entries synchronously. 
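     * <p>
     * For illustration, a typical synchronous read loop (a sketch;
     * {@code process} is a hypothetical consumer):
     * <pre>
     * Enumeration&lt;LedgerEntry&gt; entries = lh.readEntries(0, lh.getLastAddConfirmed());
     * while (entries.hasMoreElements()) {
     *     process(entries.nextElement());
     * }
     * </pre>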
 *
     * @param firstEntry
     *          id of first entry of sequence (included)
     * @param lastEntry
     *          id of last entry of sequence (included)
     */
    public Enumeration<LedgerEntry> readEntries(long firstEntry, long lastEntry)
            throws InterruptedException, BKException {
        SyncCounter counter = new SyncCounter();
        counter.inc();

        asyncReadEntries(firstEntry, lastEntry, new SyncReadCallback(), counter);

        counter.block(0);
        if (counter.getrc() != BKException.Code.OK) {
            throw BKException.create(counter.getrc());
        }

        return counter.getSequence();
    }

    /**
     * Read a sequence of entries asynchronously.
     *
     * @param firstEntry
     *          id of first entry of sequence
     * @param lastEntry
     *          id of last entry of sequence
     * @param cb
     *          object implementing read callback interface
     * @param ctx
     *          control object
     */
    public void asyncReadEntries(long firstEntry, long lastEntry,
                                 ReadCallback cb, Object ctx) {
        // Little sanity check
        if (firstEntry < 0 || lastEntry > lastAddConfirmed
                || firstEntry > lastEntry) {
            cb.readComplete(BKException.Code.ReadException, this, null, ctx);
            return;
        }
        try {
            new PendingReadOp(this, bk.scheduler, firstEntry, lastEntry, cb, ctx).initiate();
        } catch (InterruptedException e) {
            cb.readComplete(BKException.Code.InterruptedException, this, null, ctx);
        }
    }

    /**
     * Add an entry synchronously to an open ledger.
     *
     * @param data
     *          array of bytes to be written to the ledger
     * @return the entryId of the newly inserted entry
     */
    public long addEntry(byte[] data) throws InterruptedException, BKException {
        return addEntry(data, 0, data.length);
    }

    /**
     * Add an entry synchronously to an open ledger.
     *
     * @param data
     *          array of bytes to be written to the ledger
     * @param offset
     *          offset from which to take bytes from data
     * @param length
     *          number of bytes to take from data
     * @return the entryId of the newly inserted entry
     */
    public long addEntry(byte[] data, int offset, int length)
            throws InterruptedException, BKException {
        LOG.debug("Adding entry {}", data);
        SyncCounter counter = new SyncCounter();
        counter.inc();

        SyncAddCallback callback = new SyncAddCallback();
        asyncAddEntry(data, offset, length, callback, counter);
        counter.block(0);

        if (counter.getrc() != BKException.Code.OK) {
            throw BKException.create(counter.getrc());
        }
        return callback.entryId;
    }

    /**
     * Add an entry asynchronously to an open ledger.
     *
     * @param data
     *          array of bytes to be written
     * @param cb
     *          object implementing the callback interface
     * @param ctx
     *          some control object
     */
    public void asyncAddEntry(final byte[] data, final AddCallback cb,
                              final Object ctx) {
        asyncAddEntry(data, 0, data.length, cb, ctx);
    }

    /**
     * Add an entry asynchronously to an open ledger, using an offset and range.
     *
     * @param data
     *          array of bytes to be written
     * @param offset
     *          offset from which to take bytes from data
     * @param length
     *          number of bytes to take from data
     * @param cb
     *          object implementing the callback interface
     * @param ctx
     *          some control object
     * @throws ArrayIndexOutOfBoundsException if offset or length is negative or
     *          offset and length sum to a value higher than the length of data.
     */
    public void asyncAddEntry(final byte[] data, final int offset, final int length,
                              final AddCallback cb, final Object ctx) {
        PendingAddOp op = new PendingAddOp(LedgerHandle.this, cb, ctx);
        doAsyncAddEntry(op, data, offset, length, cb, ctx);
    }

    /**
     * Make a recovery add entry request. Recovery adds can add to a ledger even
     * if it has been fenced.
     *
     * This is only valid for bookie and ledger recovery, which may need to
     * replicate entries to a quorum of bookies to ensure data safety.
     *
     * Normal clients should never call this method.
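     * <p>
     * (Recovery adds are enabled via {@code PendingAddOp#enableRecoveryAdd()},
     * which appears to correspond to the {@code BookieProtocol.FLAG_RECOVERY_ADD}
     * flag used for recovery writes elsewhere in this package; a bookie accepts
     * such an add even though the ledger has been fenced.)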
 */
    void asyncRecoveryAddEntry(final byte[] data, final int offset, final int length,
                               final AddCallback cb, final Object ctx) {
        PendingAddOp op = new PendingAddOp(LedgerHandle.this, cb, ctx).enableRecoveryAdd();
        doAsyncAddEntry(op, data, offset, length, cb, ctx);
    }

    private void doAsyncAddEntry(final PendingAddOp op, final byte[] data,
                                 final int offset, final int length,
                                 final AddCallback cb, final Object ctx) {
        if (offset < 0 || length < 0 || (offset + length) > data.length) {
            throw new ArrayIndexOutOfBoundsException(
                "Invalid values for offset("+offset
                +") or length("+length+")");
        }
        throttler.acquire();

        final long entryId;
        final long currentLength;
        boolean wasClosed = false;
        synchronized(this) {
            // synchronized on this to ensure that
            // the ledger isn't closed between checking and
            // updating lastAddPushed
            if (metadata.isClosed()) {
                wasClosed = true;
                entryId = -1;
                currentLength = 0;
            } else {
                entryId = ++lastAddPushed;
                currentLength = addToLength(length);
                op.setEntryId(entryId);
                pendingAddOps.add(op);
            }
        }

        if (wasClosed) {
            // make sure the callback is triggered in the main worker pool
            try {
                bk.mainWorkerPool.submit(new SafeRunnable() {
                    @Override
                    public void safeRun() {
                        LOG.warn("Attempt to add to closed ledger: {}", ledgerId);
                        cb.addComplete(BKException.Code.LedgerClosedException,
                                LedgerHandle.this, INVALID_ENTRY_ID, ctx);
                    }
                    @Override
                    public String toString() {
                        return String.format("AsyncAddEntryToClosedLedger(lid=%d)", ledgerId);
                    }
                });
            } catch (RejectedExecutionException e) {
                cb.addComplete(BKException.Code.InterruptedException,
                        LedgerHandle.this, INVALID_ENTRY_ID, ctx);
            }
            return;
        }

        try {
            bk.mainWorkerPool.submit(new SafeRunnable() {
                @Override
                public void safeRun() {
                    ChannelBuffer toSend = macManager.computeDigestAndPackageForSending(
                            entryId, lastAddConfirmed, currentLength, data, offset, length);
                    op.initiate(toSend, length);
                }
            });
        } catch (RuntimeException e) {
            cb.addComplete(BKException.Code.InterruptedException,
                    LedgerHandle.this, INVALID_ENTRY_ID, ctx);
        }
    }

    /**
     * Obtains asynchronously the last confirmed write from a quorum of bookies.
     * This call obtains the last add confirmed each bookie has received for
     * this ledger and returns the maximum. If the ledger has been closed, the
     * value returned by this call may not correspond to the id of the last
     * entry of the ledger, since it reads the hint of bookies. Consequently,
     * when the ledger has been closed, it may return a different value than
     * getLastAddConfirmed, which returns the local value of the ledger handle.
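     * <p>
     * An illustrative invocation (a sketch, error handling elided):
     * <pre>
     * lh.asyncReadLastConfirmed(new AsyncCallback.ReadLastConfirmedCallback() {
     *     public void readLastConfirmedComplete(int rc, long lastConfirmed, Object ctx) {
     *         if (rc == BKException.Code.OK) {
     *             // entries up to lastConfirmed are safe to read
     *         }
     *     }
     * }, null);
     * </pre>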
 *
     * @see #getLastAddConfirmed()
     *
     * @param cb
     * @param ctx
     */
    public void asyncReadLastConfirmed(final ReadLastConfirmedCallback cb, final Object ctx) {
        boolean isClosed;
        long lastEntryId;
        synchronized (this) {
            isClosed = metadata.isClosed();
            lastEntryId = metadata.getLastEntryId();
        }
        if (isClosed) {
            cb.readLastConfirmedComplete(BKException.Code.OK, lastEntryId, ctx);
            return;
        }
        ReadLastConfirmedOp.LastConfirmedDataCallback innercb = new ReadLastConfirmedOp.LastConfirmedDataCallback() {
            @Override
            public void readLastConfirmedDataComplete(int rc, DigestManager.RecoveryData data) {
                if (rc == BKException.Code.OK) {
                    lastAddConfirmed = Math.max(lastAddConfirmed, data.lastAddConfirmed);
                    lastAddPushed = Math.max(lastAddPushed, data.lastAddConfirmed);
                    length = Math.max(length, data.length);
                    cb.readLastConfirmedComplete(rc, data.lastAddConfirmed, ctx);
                } else {
                    cb.readLastConfirmedComplete(rc, INVALID_ENTRY_ID, ctx);
                }
            }
        };
        new ReadLastConfirmedOp(this, innercb).initiate();
    }

    /**
     * Context objects for the synchronous call to read last confirmed.
     */
    static class LastConfirmedCtx {
        final static long ENTRY_ID_PENDING = -10;
        long response;
        int rc;

        LastConfirmedCtx() {
            this.response = ENTRY_ID_PENDING;
        }

        void setLastConfirmed(long lastConfirmed) {
            this.response = lastConfirmed;
        }

        long getlastConfirmed() {
            return this.response;
        }

        void setRC(int rc) {
            this.rc = rc;
        }

        int getRC() {
            return this.rc;
        }

        boolean ready() {
            return (this.response != ENTRY_ID_PENDING);
        }
    }

    /**
     * Obtains synchronously the last confirmed write from a quorum of bookies.
     * This call obtains the last add confirmed each bookie has received for
     * this ledger and returns the maximum. If the ledger has been closed, the
     * value returned by this call may not correspond to the id of the last
     * entry of the ledger, since it reads the hint of bookies. Consequently,
     * when the ledger has been closed, it may return a different value than
     * getLastAddConfirmed, which returns the local value of the ledger handle.
     *
     * @see #getLastAddConfirmed()
     *
     * @return The entry id of the last confirmed write or {@link #INVALID_ENTRY_ID INVALID_ENTRY_ID}
     *         if no entry has been confirmed
     * @throws InterruptedException
     * @throws BKException
     */
    public long readLastConfirmed()
            throws InterruptedException, BKException {
        LastConfirmedCtx ctx = new LastConfirmedCtx();
        asyncReadLastConfirmed(new SyncReadLastConfirmedCallback(), ctx);
        synchronized (ctx) {
            while (!ctx.ready()) {
                ctx.wait();
            }
        }
        if (ctx.getRC() != BKException.Code.OK) {
            throw BKException.create(ctx.getRC());
        }
        return ctx.getlastConfirmed();
    }

    // close the ledger and send fails to all the adds in the pipeline
    void handleUnrecoverableErrorDuringAdd(int rc) {
        if (metadata.isInRecovery()) {
            // we should not close the ledger if it is in recovery mode,
            // otherwise we may lose entries.
errorOutPendingAdds(rc); return; } LOG.error("Closing ledger {} due to error {}", ledgerId, rc); asyncCloseInternal(NoopCloseCallback.instance, null, rc); } void errorOutPendingAdds(int rc) { errorOutPendingAdds(rc, drainPendingAddsToErrorOut()); } synchronized List drainPendingAddsToErrorOut() { PendingAddOp pendingAddOp; List opsDrained = new ArrayList(pendingAddOps.size()); while ((pendingAddOp = pendingAddOps.poll()) != null) { addToLength(-pendingAddOp.entryLength); opsDrained.add(pendingAddOp); } return opsDrained; } void errorOutPendingAdds(int rc, List ops) { for (PendingAddOp op : ops) { op.submitCallback(rc); } } void sendAddSuccessCallbacks() { // Start from the head of the queue and proceed while there are // entries that have had all their responses come back PendingAddOp pendingAddOp; while ((pendingAddOp = pendingAddOps.peek()) != null && blockAddCompletions.get() == 0) { if (!pendingAddOp.completed) { return; } pendingAddOps.remove(); lastAddConfirmed = pendingAddOp.entryId; pendingAddOp.submitCallback(BKException.Code.OK); } } ArrayList replaceBookieInMetadata(final InetSocketAddress addr, final int bookieIndex) throws BKException.BKNotEnoughBookiesException { InetSocketAddress newBookie; LOG.info("Handling failure of bookie: {} index: {}", addr, bookieIndex); final ArrayList newEnsemble = new ArrayList(); final long newEnsembleStartEntry = lastAddConfirmed + 1; // avoid parallel ensemble changes to same ensemble. synchronized (metadata) { newBookie = bk.bookieWatcher.getAdditionalBookie(metadata.currentEnsemble); newEnsemble.addAll(metadata.currentEnsemble); newEnsemble.set(bookieIndex, newBookie); if (LOG.isDebugEnabled()) { LOG.debug("Changing ensemble from: " + metadata.currentEnsemble + " to: " + newEnsemble + " for ledger: " + ledgerId + " starting at entry: " + (lastAddConfirmed + 1)); } metadata.addEnsemble(newEnsembleStartEntry, newEnsemble); } return newEnsemble; } void handleBookieFailure(final InetSocketAddress addr, final int bookieIndex) { blockAddCompletions.incrementAndGet(); synchronized (metadata) { if (!metadata.currentEnsemble.get(bookieIndex).equals(addr)) { // ensemble has already changed, failure of this addr is immaterial LOG.warn("Write did not succeed to {}, bookieIndex {}, but we have already fixed it.", addr, bookieIndex); blockAddCompletions.decrementAndGet(); return; } try { ArrayList newEnsemble = replaceBookieInMetadata(addr, bookieIndex); EnsembleInfo ensembleInfo = new EnsembleInfo(newEnsemble, bookieIndex, addr); writeLedgerConfig(new ChangeEnsembleCb(ensembleInfo)); } catch (BKException.BKNotEnoughBookiesException e) { LOG.error("Could not get additional bookie to " + "remake ensemble, closing ledger: " + ledgerId); handleUnrecoverableErrorDuringAdd(e.getCode()); return; } } } // Contains newly reformed ensemble, bookieIndex, failedBookieAddress private static final class EnsembleInfo { private final ArrayList newEnsemble; private final int bookieIndex; private final InetSocketAddress addr; public EnsembleInfo(ArrayList newEnsemble, int bookieIndex, InetSocketAddress addr) { this.newEnsemble = newEnsemble; this.bookieIndex = bookieIndex; this.addr = addr; } } /** * Callback which is updating the ledgerMetadata in zk with the newly * reformed ensemble. On MetadataVersionException, will reread latest * ledgerMetadata and act upon. 
*/ private final class ChangeEnsembleCb extends OrderedSafeGenericCallback { private final EnsembleInfo ensembleInfo; ChangeEnsembleCb(EnsembleInfo ensembleInfo) { super(bk.mainWorkerPool, ledgerId); this.ensembleInfo = ensembleInfo; } @Override public void safeOperationComplete(final int rc, Void result) { if (rc == BKException.Code.MetadataVersionException) { rereadMetadata(new ReReadLedgerMetadataCb(rc, ensembleInfo)); return; } else if (rc != BKException.Code.OK) { LOG.error("Could not persist ledger metadata while " + "changing ensemble to: " + ensembleInfo.newEnsemble + " , closing ledger"); handleUnrecoverableErrorDuringAdd(rc); return; } blockAddCompletions.decrementAndGet(); // the failed bookie has been replaced unsetSuccessAndSendWriteRequest(ensembleInfo.bookieIndex); } }; /** * Callback which is reading the ledgerMetadata present in zk. This will try * to resolve the version conflicts. */ private final class ReReadLedgerMetadataCb extends OrderedSafeGenericCallback { private final int rc; private final EnsembleInfo ensembleInfo; ReReadLedgerMetadataCb(int rc, EnsembleInfo ensembleInfo) { super(bk.mainWorkerPool, ledgerId); this.rc = rc; this.ensembleInfo = ensembleInfo; } @Override public void safeOperationComplete(int newrc, LedgerMetadata newMeta) { if (newrc != BKException.Code.OK) { LOG.error("Error reading new metadata from ledger " + "after changing ensemble, code=" + newrc); handleUnrecoverableErrorDuringAdd(rc); } else { if (!resolveConflict(newMeta)) { LOG.error("Could not resolve ledger metadata conflict " + "while changing ensemble to: " + ensembleInfo.newEnsemble + ", old meta data is \n" + new String(metadata.serialize()) + "\n, new meta data is \n" + new String(newMeta.serialize()) + "\n ,closing ledger"); handleUnrecoverableErrorDuringAdd(rc); } } } /** * Specific resolve conflicts happened when multiple bookies failures in same ensemble. *
         * <p>
         * Resolving the version conflicts between the local ledgerMetadata and
         * the zk ledgerMetadata. This will do the following:
         * <ul>
         * <li>check whether the ledgerMetadata state of local and zk
         * matches;</li>
         * <li>if the zk ledgerMetadata still contains the failed bookie, then
         * update zookeeper with the newBookie, otherwise send the write
         * request.</li>
         * </ul>
 */
        private boolean resolveConflict(LedgerMetadata newMeta) {
            // make sure the ledger hasn't been closed by others.
            if (metadata.getState() != newMeta.getState()) {
                return false;
            }
            // We should check the number of ensembles, since there are two
            // kinds of metadata conflicts:
            // - Case 1: Multiple bookies involved in an ensemble change.
            //           The number of ensembles is the same in this case.
            // - Case 2: Recovery (auto/manual) replaced the ensemble while an
            //           ensemble change happened. The metadata changed by the
            //           ensemble change has one more ensemble than the
            //           metadata changed by recovery.
            int diff = newMeta.getEnsembles().size() - metadata.getEnsembles().size();
            if (0 != diff) {
                if (-1 == diff) {
                    // Case 2: the zookeeper metadata was changed by others
                    // (e.g. recovery).
                    return updateMetadataIfPossible(newMeta);
                }
                return false;
            }
            //
            // Case 1:
            //
            // If the failed bookie still exists in the metadata (in zookeeper),
            // it means the ensemble change for the failed bookie failed due to
            // metadata conflicts, so try to update the ensemble change
            // metadata again. Otherwise the ensemble change has already
            // succeeded: unset the success and re-add the entries.
            if (newMeta.currentEnsemble.get(ensembleInfo.bookieIndex).equals(
                    ensembleInfo.addr)) {
                // If the in-memory data doesn't contain the failed bookie, the
                // ensemble change didn't finish, so try to resolve conflicts
                // with the metadata read from zookeeper and update the
                // ensemble change metadata again.
                if (!metadata.currentEnsemble.get(ensembleInfo.bookieIndex)
                        .equals(ensembleInfo.addr)) {
                    return updateMetadataIfPossible(newMeta);
                }
            } else {
                // the failed bookie has been replaced
                blockAddCompletions.decrementAndGet();
                unsetSuccessAndSendWriteRequest(ensembleInfo.bookieIndex);
            }
            return true;
        }

        private boolean updateMetadataIfPossible(LedgerMetadata newMeta) {
            // If the local metadata is newer than the zookeeper metadata, the
            // metadata was updated again while we were re-reading it; kick off
            // the re-read again.
            if (metadata.isNewerThan(newMeta)) {
                rereadMetadata(this);
                return true;
            }
            // make sure the metadata hasn't been changed by others.
            if (metadata.isConflictWith(newMeta)) {
                return false;
            }
            LOG.info("Resolve ledger metadata conflict while changing ensemble to: {},"
                    + " old meta data is \n {} \n, new meta data is \n {}.",
                    new Object[] { ensembleInfo.newEnsemble, metadata, newMeta });
            // update znode version
            metadata.setVersion(newMeta.getVersion());
            // Merge the ensemble infos from the new metadata, except the last
            // ensemble, since it might have been modified by the recovery tool.
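            // For illustration (hypothetical values): if the local ensembles
            // are keyed {0, 100, 200} and zookeeper's are {0, 100}, the
            // ensembles for keys 0 and 100 are taken from zookeeper while the
            // local ensemble at 200 (our in-flight ensemble change) is kept.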
metadata.mergeEnsembles(newMeta.getEnsembles()); writeLedgerConfig(new ChangeEnsembleCb(ensembleInfo)); return true; } }; void unsetSuccessAndSendWriteRequest(final int bookieIndex) { for (PendingAddOp pendingAddOp : pendingAddOps) { pendingAddOp.unsetSuccessAndSendWriteRequest(bookieIndex); } } void rereadMetadata(final GenericCallback cb) { bk.getLedgerManager().readLedgerMetadata(ledgerId, cb); } void recover(final GenericCallback cb) { boolean wasClosed = false; boolean wasInRecovery = false; synchronized (this) { if (metadata.isClosed()) { lastAddConfirmed = lastAddPushed = metadata.getLastEntryId(); length = metadata.getLength(); wasClosed = true; } else { wasClosed = false; if (metadata.isInRecovery()) { wasInRecovery = true; } else { wasInRecovery = false; metadata.markLedgerInRecovery(); } } } if (wasClosed) { // We are already closed, nothing to do cb.operationComplete(BKException.Code.OK, null); return; } if (wasInRecovery) { // if metadata is already in recover, dont try to write again, // just do the recovery from the starting point new LedgerRecoveryOp(LedgerHandle.this, cb).initiate(); return; } writeLedgerConfig(new OrderedSafeGenericCallback(bk.mainWorkerPool, ledgerId) { @Override public void safeOperationComplete(final int rc, Void result) { if (rc == BKException.Code.MetadataVersionException) { rereadMetadata(new OrderedSafeGenericCallback(bk.mainWorkerPool, ledgerId) { @Override public void safeOperationComplete(int rc, LedgerMetadata newMeta) { if (rc != BKException.Code.OK) { cb.operationComplete(rc, null); } else { metadata = newMeta; recover(cb); } } }); } else if (rc == BKException.Code.OK) { new LedgerRecoveryOp(LedgerHandle.this, cb).initiate(); } else { LOG.error("Error writing ledger config " + rc + " of ledger " + ledgerId); cb.operationComplete(rc, null); } } }); } static class NoopCloseCallback implements CloseCallback { static NoopCloseCallback instance = new NoopCloseCallback(); @Override public void closeComplete(int rc, LedgerHandle lh, Object ctx) { if (rc != BKException.Code.OK) { LOG.warn("Close failed: " + BKException.getMessage(rc)); } // noop } } private static class SyncReadCallback implements ReadCallback { /** * Implementation of callback interface for synchronous read method. * * @param rc * return code * @param leder * ledger identifier * @param seq * sequence of entries * @param ctx * control object */ @Override public void readComplete(int rc, LedgerHandle lh, Enumeration seq, Object ctx) { SyncCounter counter = (SyncCounter) ctx; synchronized (counter) { counter.setSequence(seq); counter.setrc(rc); counter.dec(); counter.notify(); } } } private static class SyncAddCallback implements AddCallback { long entryId = -1; /** * Implementation of callback interface for synchronous read method. * * @param rc * return code * @param leder * ledger identifier * @param entry * entry identifier * @param ctx * control object */ @Override public void addComplete(int rc, LedgerHandle lh, long entry, Object ctx) { SyncCounter counter = (SyncCounter) ctx; this.entryId = entry; counter.setrc(rc); counter.dec(); } } private static class SyncReadLastConfirmedCallback implements ReadLastConfirmedCallback { /** * Implementation of callback interface for synchronous read last confirmed method. 
*/ @Override public void readLastConfirmedComplete(int rc, long lastConfirmed, Object ctx) { LastConfirmedCtx lcCtx = (LastConfirmedCtx) ctx; synchronized(lcCtx) { lcCtx.setRC(rc); lcCtx.setLastConfirmed(lastConfirmed); lcCtx.notify(); } } } private static class SyncCloseCallback implements CloseCallback { /** * Close callback method * * @param rc * @param lh * @param ctx */ @Override public void closeComplete(int rc, LedgerHandle lh, Object ctx) { SyncCounter counter = (SyncCounter) ctx; counter.setrc(rc); synchronized (counter) { counter.dec(); counter.notify(); } } } } LedgerMetadata.java000066400000000000000000000447641244507361200344410ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/client/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.bookkeeper.client; import static com.google.common.base.Charsets.UTF_8; import java.io.BufferedReader; import java.io.StringReader; import java.io.IOException; import java.net.InetSocketAddress; import java.util.ArrayList; import java.util.Iterator; import java.util.Map; import java.util.Map.Entry; import java.util.SortedMap; import java.util.TreeMap; import java.util.Arrays; import org.apache.bookkeeper.versioning.Version; import com.google.protobuf.TextFormat; import com.google.protobuf.ByteString; import org.apache.bookkeeper.proto.DataFormats.LedgerMetadataFormat; import org.apache.bookkeeper.util.StringUtils; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * This class encapsulates all the ledger metadata that is persistently stored * in zookeeper. It provides parsing and serialization methods of such metadata. 
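 * <p>
 * For illustration, in the current (version 2) format the serialized bytes
 * are a single header line, {@code BookieMetadataFormatVersion} followed by a
 * tab and the version number, and then the text-format rendering of the
 * {@code LedgerMetadataFormat} protobuf (see {@code serialize()} and
 * {@code parseConfig(byte[], Version)}).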
* */ public class LedgerMetadata { static final Logger LOG = LoggerFactory.getLogger(LedgerMetadata.class); private static final String closed = "CLOSED"; private static final String lSplitter = "\n"; private static final String tSplitter = "\t"; // can't use -1 for NOTCLOSED because that is reserved for a closed, empty // ledger private static final int NOTCLOSED = -101; private static final int IN_RECOVERY = -102; public static final int LOWEST_COMPAT_METADATA_FORMAT_VERSION = 0; public static final int CURRENT_METADATA_FORMAT_VERSION = 2; public static final String VERSION_KEY = "BookieMetadataFormatVersion"; private int metadataFormatVersion = 0; private int ensembleSize; private int writeQuorumSize; private int ackQuorumSize; private long length; private long lastEntryId; private LedgerMetadataFormat.State state; private SortedMap> ensembles = new TreeMap>(); ArrayList currentEnsemble; volatile Version version = Version.NEW; private boolean hasPassword = false; private LedgerMetadataFormat.DigestType digestType; private byte[] password; public LedgerMetadata(int ensembleSize, int writeQuorumSize, int ackQuorumSize, BookKeeper.DigestType digestType, byte[] password) { this.ensembleSize = ensembleSize; this.writeQuorumSize = writeQuorumSize; this.ackQuorumSize = ackQuorumSize; /* * It is set in PendingReadOp.readEntryComplete, and * we read it in LedgerRecoveryOp.readComplete. */ this.length = 0; this.state = LedgerMetadataFormat.State.OPEN; this.lastEntryId = LedgerHandle.INVALID_ENTRY_ID; this.metadataFormatVersion = CURRENT_METADATA_FORMAT_VERSION; this.digestType = digestType.equals(BookKeeper.DigestType.MAC) ? LedgerMetadataFormat.DigestType.HMAC : LedgerMetadataFormat.DigestType.CRC32; this.password = Arrays.copyOf(password, password.length); this.hasPassword = true; } /** * Copy Constructor. */ LedgerMetadata(LedgerMetadata other) { this.ensembleSize = other.ensembleSize; this.writeQuorumSize = other.writeQuorumSize; this.ackQuorumSize = other.ackQuorumSize; this.length = other.length; this.lastEntryId = other.lastEntryId; this.metadataFormatVersion = other.metadataFormatVersion; this.state = other.state; this.version = other.version; this.hasPassword = other.hasPassword; this.digestType = other.digestType; this.password = new byte[other.password.length]; System.arraycopy(other.password, 0, this.password, 0, other.password.length); // copy the ensembles for (Entry> entry : other.ensembles.entrySet()) { long startEntryId = entry.getKey(); ArrayList newEnsemble = new ArrayList(entry.getValue()); this.addEnsemble(startEntryId, newEnsemble); } } private LedgerMetadata() { this(0, 0, 0, BookKeeper.DigestType.MAC, new byte[] {}); this.hasPassword = false; } /** * Get the Map of bookie ensembles for the various ledger fragments * that make up the ledger. * * @return SortedMap of Ledger Fragments and the corresponding * bookie ensembles that store the entries. */ public SortedMap> getEnsembles() { return ensembles; } void setEnsembles(SortedMap> ensembles) { this.ensembles = ensembles; } public int getEnsembleSize() { return ensembleSize; } public int getWriteQuorumSize() { return writeQuorumSize; } public int getAckQuorumSize() { return ackQuorumSize; } /** * In versions 4.1.0 and below, the digest type and password were not * stored in the metadata. 
* * @return whether the password has been stored in the metadata */ boolean hasPassword() { return hasPassword; } byte[] getPassword() { return Arrays.copyOf(password, password.length); } BookKeeper.DigestType getDigestType() { if (digestType.equals(LedgerMetadataFormat.DigestType.HMAC)) { return BookKeeper.DigestType.MAC; } else { return BookKeeper.DigestType.CRC32; } } public long getLastEntryId() { return lastEntryId; } public long getLength() { return length; } void setLength(long length) { this.length = length; } public boolean isClosed() { return state == LedgerMetadataFormat.State.CLOSED; } public boolean isInRecovery() { return state == LedgerMetadataFormat.State.IN_RECOVERY; } LedgerMetadataFormat.State getState() { return state; } void setState(LedgerMetadataFormat.State state) { this.state = state; } void markLedgerInRecovery() { state = LedgerMetadataFormat.State.IN_RECOVERY; } void close(long entryId) { lastEntryId = entryId; state = LedgerMetadataFormat.State.CLOSED; } void addEnsemble(long startEntryId, ArrayList ensemble) { assert ensembles.isEmpty() || startEntryId >= ensembles.lastKey(); ensembles.put(startEntryId, ensemble); currentEnsemble = ensemble; } ArrayList getEnsemble(long entryId) { // the head map cannot be empty, since we insert an ensemble for // entry-id 0, right when we start return ensembles.get(ensembles.headMap(entryId + 1).lastKey()); } /** * the entry id > the given entry-id at which the next ensemble change takes * place ( -1 if no further ensemble changes) * * @param entryId * @return */ long getNextEnsembleChange(long entryId) { SortedMap> tailMap = ensembles.tailMap(entryId + 1); if (tailMap.isEmpty()) { return -1; } else { return tailMap.firstKey(); } } /** * Generates a byte array of this object * * @return the metadata serialized into a byte array */ public byte[] serialize() { if (metadataFormatVersion == 1) { return serializeVersion1(); } LedgerMetadataFormat.Builder builder = LedgerMetadataFormat.newBuilder(); builder.setQuorumSize(writeQuorumSize).setAckQuorumSize(ackQuorumSize) .setEnsembleSize(ensembleSize).setLength(length) .setState(state).setLastEntryId(lastEntryId); if (hasPassword) { builder.setDigestType(digestType).setPassword(ByteString.copyFrom(password)); } for (Map.Entry> entry : ensembles.entrySet()) { LedgerMetadataFormat.Segment.Builder segmentBuilder = LedgerMetadataFormat.Segment.newBuilder(); segmentBuilder.setFirstEntryId(entry.getKey()); for (InetSocketAddress addr : entry.getValue()) { segmentBuilder.addEnsembleMember(StringUtils.addrToString(addr)); } builder.addSegment(segmentBuilder.build()); } StringBuilder s = new StringBuilder(); s.append(VERSION_KEY).append(tSplitter).append(CURRENT_METADATA_FORMAT_VERSION).append(lSplitter); s.append(TextFormat.printToString(builder.build())); LOG.debug("Serialized config: {}", s); return s.toString().getBytes(); } private byte[] serializeVersion1() { StringBuilder s = new StringBuilder(); s.append(VERSION_KEY).append(tSplitter).append(metadataFormatVersion).append(lSplitter); s.append(writeQuorumSize).append(lSplitter).append(ensembleSize).append(lSplitter).append(length); for (Map.Entry> entry : ensembles.entrySet()) { s.append(lSplitter).append(entry.getKey()); for (InetSocketAddress addr : entry.getValue()) { s.append(tSplitter); s.append(StringUtils.addrToString(addr)); } } if (state == LedgerMetadataFormat.State.IN_RECOVERY) { s.append(lSplitter).append(IN_RECOVERY).append(tSplitter).append(closed); } else if (state == LedgerMetadataFormat.State.CLOSED) { 
s.append(lSplitter).append(getLastEntryId()).append(tSplitter).append(closed); } LOG.debug("Serialized config: {}", s); return s.toString().getBytes(); } /** * Parses a given byte array and transforms into a LedgerConfig object * * @param bytes * byte array to parse * @param version * version of the ledger metadata * @return LedgerConfig * @throws IOException * if the given byte[] cannot be parsed */ public static LedgerMetadata parseConfig(byte[] bytes, Version version) throws IOException { LedgerMetadata lc = new LedgerMetadata(); lc.version = version; String config = new String(bytes); LOG.debug("Parsing Config: {}", config); BufferedReader reader = new BufferedReader(new StringReader(config)); String versionLine = reader.readLine(); if (versionLine == null) { throw new IOException("Invalid metadata. Content missing"); } int i = 0; if (versionLine.startsWith(VERSION_KEY)) { String parts[] = versionLine.split(tSplitter); lc.metadataFormatVersion = new Integer(parts[1]); } else { // if no version is set, take it to be version 1 // as the parsing is the same as what we had before // we introduce versions lc.metadataFormatVersion = 1; // reset the reader reader.close(); reader = new BufferedReader(new StringReader(config)); } if (lc.metadataFormatVersion < LOWEST_COMPAT_METADATA_FORMAT_VERSION || lc.metadataFormatVersion > CURRENT_METADATA_FORMAT_VERSION) { throw new IOException("Metadata version not compatible. Expected between " + LOWEST_COMPAT_METADATA_FORMAT_VERSION + " and " + CURRENT_METADATA_FORMAT_VERSION + ", but got " + lc.metadataFormatVersion); } if (lc.metadataFormatVersion == 1) { return parseVersion1Config(lc, reader); } LedgerMetadataFormat.Builder builder = LedgerMetadataFormat.newBuilder(); TextFormat.merge(reader, builder); LedgerMetadataFormat data = builder.build(); lc.writeQuorumSize = data.getQuorumSize(); if (data.hasAckQuorumSize()) { lc.ackQuorumSize = data.getAckQuorumSize(); } else { lc.ackQuorumSize = lc.writeQuorumSize; } lc.ensembleSize = data.getEnsembleSize(); lc.length = data.getLength(); lc.state = data.getState(); lc.lastEntryId = data.getLastEntryId(); if (data.hasPassword()) { lc.digestType = data.getDigestType(); lc.password = data.getPassword().toByteArray(); lc.hasPassword = true; } for (LedgerMetadataFormat.Segment s : data.getSegmentList()) { ArrayList addrs = new ArrayList(); for (String member : s.getEnsembleMemberList()) { addrs.add(StringUtils.parseAddr(member)); } lc.addEnsemble(s.getFirstEntryId(), addrs); } return lc; } static LedgerMetadata parseVersion1Config(LedgerMetadata lc, BufferedReader reader) throws IOException { try { lc.writeQuorumSize = lc.ackQuorumSize = new Integer(reader.readLine()); lc.ensembleSize = new Integer(reader.readLine()); lc.length = new Long(reader.readLine()); String line = reader.readLine(); while (line != null) { String parts[] = line.split(tSplitter); if (parts[1].equals(closed)) { Long l = new Long(parts[0]); if (l == IN_RECOVERY) { lc.state = LedgerMetadataFormat.State.IN_RECOVERY; } else { lc.state = LedgerMetadataFormat.State.CLOSED; lc.lastEntryId = l; } break; } else { lc.state = LedgerMetadataFormat.State.OPEN; } ArrayList addrs = new ArrayList(); for (int j = 1; j < parts.length; j++) { addrs.add(StringUtils.parseAddr(parts[j])); } lc.addEnsemble(new Long(parts[0]), addrs); line = reader.readLine(); } } catch (NumberFormatException e) { throw new IOException(e); } return lc; } /** * Updates the version of this metadata. 
* * @param v Version */ public void setVersion(Version v) { this.version = v; }
/** * Returns the last version. * * @return version */ public Version getVersion() { return this.version; }
/** * Is the metadata newer than the given newMeta. * * @param newMeta * @return */ boolean isNewerThan(LedgerMetadata newMeta) { if (null == version) { return false; } return Version.Occurred.AFTER == version.compare(newMeta.version); }
/** * Does the metadata conflict with the newly updated metadata. * * @param newMeta * Re-read metadata * @return true if the metadata conflicts. */ boolean isConflictWith(LedgerMetadata newMeta) { /* * if length & close have changed, then another client has * opened the ledger, can't resolve this conflict. */ if (metadataFormatVersion != newMeta.metadataFormatVersion || ensembleSize != newMeta.ensembleSize || writeQuorumSize != newMeta.writeQuorumSize || ackQuorumSize != newMeta.ackQuorumSize || length != newMeta.length || state != newMeta.state || !digestType.equals(newMeta.digestType) || !Arrays.equals(password, newMeta.password)) { return true; } if (state == LedgerMetadataFormat.State.CLOSED && lastEntryId != newMeta.lastEntryId) { return true; } // if ledger is closed, we can just take the new ensembles if (newMeta.state != LedgerMetadataFormat.State.CLOSED) { // allow new metadata to be one ensemble less than current metadata // since ensemble change might kick in when recovery changed metadata int diff = ensembles.size() - newMeta.ensembles.size(); if (0 != diff && 1 != diff) { return true; } // ensemble distribution should be same // we don't check the detail ensemble, since new bookie will be set // using recovery tool. Iterator<Long> keyIter = ensembles.keySet().iterator(); Iterator<Long> newMetaKeyIter = newMeta.ensembles.keySet().iterator(); for (int i = 0; i < newMeta.ensembles.size(); i++) { Long curKey = keyIter.next(); Long newMetaKey = newMetaKeyIter.next(); if (!curKey.equals(newMetaKey)) { return true; } } } return false; }
void mergeEnsembles(SortedMap<Long, ArrayList<InetSocketAddress>> newEnsembles) { // allow new metadata to be one ensemble less than current metadata // since ensemble change might kick in when recovery changed metadata int diff = ensembles.size() - newEnsembles.size(); if (0 != diff && 1 != diff) { return; } int i = 0; for (Entry<Long, ArrayList<InetSocketAddress>> entry : newEnsembles.entrySet()) { ++i; if (ensembles.size() != i) { // we should use last ensemble from current metadata // not the new metadata read from zookeeper long key = entry.getKey(); ArrayList<InetSocketAddress> ensemble = entry.getValue(); ensembles.put(key, ensemble); } } } } LedgerOpenOp.java000066400000000000000000000152651244507361200341130ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/client/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License.
* */ package org.apache.bookkeeper.client; import java.util.Arrays; import java.security.GeneralSecurityException; import org.apache.bookkeeper.client.AsyncCallback.OpenCallback; import org.apache.bookkeeper.client.AsyncCallback.ReadLastConfirmedCallback; import org.apache.bookkeeper.client.BookKeeper.DigestType; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.GenericCallback; import org.apache.bookkeeper.util.OrderedSafeExecutor.OrderedSafeGenericCallback; import org.slf4j.Logger; import org.slf4j.LoggerFactory;
/** * Encapsulates the ledger open operation * */ class LedgerOpenOp implements GenericCallback<LedgerMetadata> { static final Logger LOG = LoggerFactory.getLogger(LedgerOpenOp.class); final BookKeeper bk; final long ledgerId; final OpenCallback cb; final Object ctx; LedgerHandle lh; final byte[] passwd; final DigestType digestType; boolean doRecovery = true; boolean administrativeOpen = false;
/** * Constructor. * * @param bk * @param ledgerId * @param digestType * @param passwd * @param cb * @param ctx */ public LedgerOpenOp(BookKeeper bk, long ledgerId, DigestType digestType, byte[] passwd, OpenCallback cb, Object ctx) { this.bk = bk; this.ledgerId = ledgerId; this.passwd = passwd; this.cb = cb; this.ctx = ctx; this.digestType = digestType; }
public LedgerOpenOp(BookKeeper bk, long ledgerId, OpenCallback cb, Object ctx) { this.bk = bk; this.ledgerId = ledgerId; this.cb = cb; this.ctx = ctx; this.passwd = bk.getConf().getBookieRecoveryPasswd(); this.digestType = bk.getConf().getBookieRecoveryDigestType(); this.administrativeOpen = true; }
/** * Initiates the ledger open operation */ public void initiate() { /** * Asynchronously read the ledger metadata node. */ bk.getLedgerManager().readLedgerMetadata(ledgerId, this); }
/** * Initiates the ledger open operation without recovery */ public void initiateWithoutRecovery() { this.doRecovery = false; initiate(); }
/** * Implements Open Ledger Callback. */ public void operationComplete(int rc, LedgerMetadata metadata) { if (BKException.Code.OK != rc) { // open ledger failed. cb.openComplete(rc, null, this.ctx); return; } final byte[] passwd; final DigestType digestType; /* For an administrative open, the default passwords * are read from the configuration, but if the metadata * already contains passwords, use these instead.
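* In other words: a plain open must present a password and digest type that
* match whatever is recorded in the metadata (failing below with
* UnauthorizedAccessException or DigestMatchException respectively), while an
* administrative open simply adopts the stored credentials when they are present.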
*/ if (administrativeOpen && metadata.hasPassword()) { passwd = metadata.getPassword(); digestType = metadata.getDigestType(); } else { passwd = this.passwd; digestType = this.digestType; if (metadata.hasPassword()) { if (!Arrays.equals(passwd, metadata.getPassword())) { LOG.error("Provided passwd does not match that in metadata"); cb.openComplete(BKException.Code.UnauthorizedAccessException, null, this.ctx); return; } if (digestType != metadata.getDigestType()) { LOG.error("Provided digest does not match that in metadata"); cb.openComplete(BKException.Code.DigestMatchException, null, this.ctx); return; } } } // get the ledger metadata back try { lh = new ReadOnlyLedgerHandle(bk, ledgerId, metadata, digestType, passwd, !doRecovery); } catch (GeneralSecurityException e) { LOG.error("Security exception while opening ledger: " + ledgerId, e); cb.openComplete(BKException.Code.DigestNotInitializedException, null, this.ctx); return; } catch (NumberFormatException e) { LOG.error("Incorrectly entered parameter throttle: " + bk.getConf().getThrottleValue(), e); cb.openComplete(BKException.Code.IncorrectParameterException, null, this.ctx); return; } if (metadata.isClosed()) { // Ledger was closed properly cb.openComplete(BKException.Code.OK, lh, this.ctx); return; } if (doRecovery) { lh.recover(new OrderedSafeGenericCallback(bk.mainWorkerPool, ledgerId) { @Override public void safeOperationComplete(int rc, Void result) { if (rc == BKException.Code.OK) { cb.openComplete(BKException.Code.OK, lh, LedgerOpenOp.this.ctx); } else if (rc == BKException.Code.UnauthorizedAccessException) { cb.openComplete(BKException.Code.UnauthorizedAccessException, null, LedgerOpenOp.this.ctx); } else { cb.openComplete(BKException.Code.LedgerRecoveryException, null, LedgerOpenOp.this.ctx); } } }); } else { lh.asyncReadLastConfirmed(new ReadLastConfirmedCallback() { @Override public void readLastConfirmedComplete(int rc, long lastConfirmed, Object ctx) { if (rc != BKException.Code.OK) { cb.openComplete(BKException.Code.ReadException, null, LedgerOpenOp.this.ctx); } else { lh.lastAddConfirmed = lh.lastAddPushed = lastConfirmed; cb.openComplete(BKException.Code.OK, lh, LedgerOpenOp.this.ctx); } } }, null); } } } LedgerRecoveryOp.java000066400000000000000000000153221244507361200350020ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/clientpackage org.apache.bookkeeper.client; /** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import java.util.Enumeration; import java.util.concurrent.ScheduledExecutorService; import org.apache.bookkeeper.client.AsyncCallback.AddCallback; import org.apache.bookkeeper.client.AsyncCallback.CloseCallback; import org.apache.bookkeeper.client.AsyncCallback.ReadCallback; import org.apache.bookkeeper.client.DigestManager.RecoveryData; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.GenericCallback; import org.apache.zookeeper.KeeperException; import org.slf4j.Logger; import org.slf4j.LoggerFactory;
/** * This class encapsulates the ledger recovery operation. It first does a read * with entry-id of -1 (BookieProtocol.LAST_ADD_CONFIRMED) to all bookies. Then * starting from the last confirmed entry (from hints in the ledger entries), * it reads forward until it is not able to find a particular entry. It closes * the ledger at that entry. * */ class LedgerRecoveryOp implements ReadCallback, AddCallback { static final Logger LOG = LoggerFactory.getLogger(LedgerRecoveryOp.class); LedgerHandle lh; int numResponsesPending; boolean proceedingWithRecovery = false; long maxAddPushed = LedgerHandle.INVALID_ENTRY_ID; long maxAddConfirmed = LedgerHandle.INVALID_ENTRY_ID; long maxLength = 0; // keep a copy of metadata for recovery. LedgerMetadata metadataForRecovery; GenericCallback<Void> cb;
class RecoveryReadOp extends PendingReadOp { RecoveryReadOp(LedgerHandle lh, ScheduledExecutorService scheduler, long startEntryId, long endEntryId, ReadCallback cb, Object ctx) { super(lh, scheduler, startEntryId, endEntryId, cb, ctx); } @Override protected LedgerMetadata getLedgerMetadata() { return metadataForRecovery; } }
public LedgerRecoveryOp(LedgerHandle lh, GenericCallback<Void> cb) { this.cb = cb; this.lh = lh; numResponsesPending = lh.metadata.getEnsembleSize(); }
public void initiate() { ReadLastConfirmedOp rlcop = new ReadLastConfirmedOp(lh, new ReadLastConfirmedOp.LastConfirmedDataCallback() { public void readLastConfirmedDataComplete(int rc, RecoveryData data) { if (rc == BKException.Code.OK) { lh.lastAddPushed = lh.lastAddConfirmed = data.lastAddConfirmed; lh.length = data.length; // keep a copy of ledger metadata before proceeding // ledger recovery metadataForRecovery = new LedgerMetadata(lh.getLedgerMetadata()); doRecoveryRead(); } else if (rc == BKException.Code.UnauthorizedAccessException) { cb.operationComplete(rc, null); } else { cb.operationComplete(BKException.Code.ReadException, null); } } }); /** * Enable fencing on this op. When the read request reaches the bookie * server it will fence off the ledger, stopping any subsequent operation * from writing to it. */ rlcop.initiateWithFencing(); }
/** * Try to read past the last confirmed. */ private void doRecoveryRead() { long nextEntry = lh.lastAddConfirmed + 1; try { new RecoveryReadOp(lh, lh.bk.scheduler, nextEntry, nextEntry, this, null).initiate(); } catch (InterruptedException e) { readComplete(BKException.Code.InterruptedException, lh, null, null); } }
@Override public void readComplete(int rc, LedgerHandle lh, Enumeration<LedgerEntry> seq, Object ctx) { if (rc == BKException.Code.OK) { LedgerEntry entry = seq.nextElement(); byte[] data = entry.getEntry(); /* * We will add this entry again to make sure it is written to enough * replicas. We subtract the length of the data itself, since it will * be added again when processing the call to add it.
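* To make the arithmetic concrete (illustrative numbers only): if the entry
* carries a ledger length of 120 bytes and its payload is 20 bytes, lh.length
* is set to 100 below, and the asyncRecoveryAddEntry that follows brings it
* back to 120.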
*/ synchronized (lh) { lh.length = entry.getLength() - (long) data.length; } lh.asyncRecoveryAddEntry(data, 0, data.length, this, null); return; } if (rc == BKException.Code.NoSuchEntryException || rc == BKException.Code.NoSuchLedgerExistsException) { lh.asyncCloseInternal(new CloseCallback() { @Override public void closeComplete(int rc, LedgerHandle lh, Object ctx) { if (rc != BKException.Code.OK) { LOG.warn("Close failed: " + BKException.getMessage(rc)); cb.operationComplete(rc, null); } else { cb.operationComplete(BKException.Code.OK, null); LOG.debug("After closing length is: {}", lh.getLength()); } } }, null, BKException.Code.LedgerClosedException); return; } // otherwise, some other error, we can't handle LOG.error("Failure " + BKException.getMessage(rc) + " while reading entry: " + (lh.lastAddConfirmed + 1) + " ledger: " + lh.ledgerId + " while recovering ledger"); cb.operationComplete(rc, null); return; } @Override public void addComplete(int rc, LedgerHandle lh, long entryId, Object ctx) { if (rc != BKException.Code.OK) { // Give up, we can't recover from this error LOG.error("Failure " + BKException.getMessage(rc) + " while writing entry: " + (lh.lastAddConfirmed + 1) + " ledger: " + lh.ledgerId + " while recovering ledger"); cb.operationComplete(rc, null); return; } doRecoveryRead(); } } MacDigestManager.java000066400000000000000000000051201244507361200347100ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/clientpackage org.apache.bookkeeper.client; /* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import java.security.GeneralSecurityException; import java.security.MessageDigest; import java.security.NoSuchAlgorithmException; import javax.crypto.Mac; import javax.crypto.spec.SecretKeySpec; import org.slf4j.Logger; import org.slf4j.LoggerFactory;
class MacDigestManager extends DigestManager { final static Logger LOG = LoggerFactory.getLogger(MacDigestManager.class); public static String DIGEST_ALGORITHM = "SHA-1"; public static String KEY_ALGORITHM = "HmacSHA1"; final byte[] passwd;
private final ThreadLocal<Mac> mac = new ThreadLocal<Mac>() { @Override protected Mac initialValue() { try { byte[] macKey = genDigest("mac", passwd); SecretKeySpec keySpec = new SecretKeySpec(macKey, KEY_ALGORITHM); Mac mac = Mac.getInstance(KEY_ALGORITHM); mac.init(keySpec); return mac; } catch (GeneralSecurityException gse) { LOG.error("Couldn't get mac instance", gse); return null; } } };
public MacDigestManager(long ledgerId, byte[] passwd) throws GeneralSecurityException { super(ledgerId); this.passwd = passwd; }
static byte[] genDigest(String pad, byte[] passwd) throws NoSuchAlgorithmException { MessageDigest digest = MessageDigest.getInstance(DIGEST_ALGORITHM); digest.update(pad.getBytes()); digest.update(passwd); return digest.digest(); }
@Override int getMacCodeLength() { return 20; } @Override byte[] getValueAndReset() { return mac.get().doFinal(); } @Override void update(byte[] data, int offset, int length) { mac.get().update(data, offset, length); } } PendingAddOp.java000066400000000000000000000162631244507361200340630ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/client/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.bookkeeper.client; import java.util.HashSet; import java.util.Set; import java.net.InetSocketAddress; import org.apache.bookkeeper.client.AsyncCallback.AddCallback; import org.apache.bookkeeper.proto.BookieProtocol; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.WriteCallback; import org.jboss.netty.buffer.ChannelBuffer; import org.slf4j.Logger; import org.slf4j.LoggerFactory;
/** * This represents a pending add operation. When it has got success from enough * bookies (the ack quorum), it checks whether it is at the head of the pending adds * queue, and if yes, sends the ack back to the application. If a bookie fails, a * replacement is made and placed at the same position in the ensemble. The pending * adds are then * rereplicated.
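* <p>
* A rough sketch of the happy path in terms of the methods below (the
* surrounding wiring in LedgerHandle is simplified away):
* <pre>{@code
* PendingAddOp op = new PendingAddOp(lh, cb, ctx);
* op.setEntryId(entryId);            // fixes the write set for this entry
* op.initiate(toSend, entryLength);  // sends the payload to each bookie in the write set
* // writeComplete() then fires once per bookie response; when the ack quorum
* // is reached, lh.sendAddSuccessCallbacks() acks completed entries in order.
* }</pre>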
* * */ class PendingAddOp implements WriteCallback { final static Logger LOG = LoggerFactory.getLogger(PendingAddOp.class); ChannelBuffer toSend; AddCallback cb; Object ctx; long entryId; int entryLength; Set<Integer> writeSet; DistributionSchedule.AckSet ackSet; boolean completed = false; LedgerHandle lh; boolean isRecoveryAdd = false;
PendingAddOp(LedgerHandle lh, AddCallback cb, Object ctx) { this.lh = lh; this.cb = cb; this.ctx = ctx; this.entryId = LedgerHandle.INVALID_ENTRY_ID; ackSet = lh.distributionSchedule.getAckSet(); }
/** * Enable the recovery add flag for this operation. * @see LedgerHandle#asyncRecoveryAddEntry */ PendingAddOp enableRecoveryAdd() { isRecoveryAdd = true; return this; }
void setEntryId(long entryId) { this.entryId = entryId; writeSet = new HashSet<Integer>(lh.distributionSchedule.getWriteSet(entryId)); }
void sendWriteRequest(int bookieIndex) { int flags = isRecoveryAdd ? BookieProtocol.FLAG_RECOVERY_ADD : BookieProtocol.FLAG_NONE; lh.bk.bookieClient.addEntry(lh.metadata.currentEnsemble.get(bookieIndex), lh.ledgerId, lh.ledgerKey, entryId, toSend, this, bookieIndex, flags); }
void unsetSuccessAndSendWriteRequest(int bookieIndex) { if (toSend == null) { // this addOp hasn't yet had its mac computed. When the mac is // computed, its write requests will be sent, so no need to send it // now return; } // Suppose that unset doesn't happen on the write set of an entry. In this // case we don't need to resend the write request upon an ensemble change. // We do need to invoke #sendAddSuccessCallbacks() for such entries because // they may have already completed, but they are just waiting for the ensemble // to change. // E.g. // ensemble (A, B, C, D), entry k is written to (A, B, D). An ensemble change // happens to replace C with E. Entry k does not complete until C is // replaced with E successfully. When the ensemble change completes, it tries // to unset entry k. C however is not in k's write set, so no entry is written // again, and no one triggers #sendAddSuccessCallbacks. Consequently, k never // completes. // // We call sendAddSuccessCallbacks() when unsetting, to cover this case. if (!writeSet.contains(bookieIndex)) { lh.sendAddSuccessCallbacks(); return; } if (LOG.isDebugEnabled()) { LOG.debug("Unsetting success for ledger: " + lh.ledgerId + " entry: " + entryId + " bookie index: " + bookieIndex); } // if we had already heard a success from this array index, need to // increment our number of responses that are pending, since we are // going to unset this success ackSet.removeBookie(bookieIndex); completed = false; sendWriteRequest(bookieIndex); }
void initiate(ChannelBuffer toSend, int entryLength) { this.toSend = toSend; this.entryLength = entryLength; for (int bookieIndex : writeSet) { sendWriteRequest(bookieIndex); } }
@Override public void writeComplete(int rc, long ledgerId, long entryId, InetSocketAddress addr, Object ctx) { int bookieIndex = (Integer) ctx; if (completed) { // I am already finished, ignore incoming responses. // otherwise, we might hit the following error handling logic, which might cause bad things.
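// (for example, a late error response from a slow bookie could otherwise
// reach handleBookieFailure below and trigger a needless ensemble change
// for an entry that has already been acknowledged)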
return; } switch (rc) { case BKException.Code.OK: // continue break; case BKException.Code.LedgerFencedException: LOG.warn("Fencing exception on write: L{} E{} on {}", new Object[] { ledgerId, entryId, addr }); lh.handleUnrecoverableErrorDuringAdd(rc); return; case BKException.Code.UnauthorizedAccessException: LOG.warn("Unauthorized access exception on write: L{} E{} on {}", new Object[] { ledgerId, entryId, addr }); lh.handleUnrecoverableErrorDuringAdd(rc); return; default: LOG.warn("Write did not succeed: L{} E{} on {}", new Object[] { ledgerId, entryId, addr }); lh.handleBookieFailure(addr, bookieIndex); return; } if (!writeSet.contains(bookieIndex)) { LOG.warn("Received a response for (lid:{}, eid:{}) from {}@{}, but it doesn't belong to {}.", new Object[] { ledgerId, entryId, addr, bookieIndex, writeSet }); return; } if (ackSet.addBookieAndCheck(bookieIndex) && !completed) { completed = true; LOG.debug("Complete (lid:{}, eid:{}).", ledgerId, entryId); // when completed an entry, try to send success add callbacks in order lh.sendAddSuccessCallbacks(); } } void submitCallback(final int rc) { if (rc != BKException.Code.OK) { LOG.error("Write of ledger entry to quorum failed: L{} E{}", lh.getId(), entryId); } cb.addComplete(rc, lh, entryId, ctx); } @Override public String toString() { StringBuilder sb = new StringBuilder(); sb.append("PendingAddOp(lid:").append(lh.ledgerId) .append(", eid:").append(entryId).append(", completed:") .append(completed).append(")"); return sb.toString(); } } PendingReadOp.java000066400000000000000000000342641244507361200342470ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/clientpackage org.apache.bookkeeper.client; /* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ import java.net.InetSocketAddress; import java.util.ArrayList; import java.util.BitSet; import java.util.Enumeration; import java.util.HashSet; import java.util.List; import java.util.NoSuchElementException; import java.util.Queue; import java.util.Set; import java.util.concurrent.ArrayBlockingQueue; import java.util.concurrent.ScheduledExecutorService; import java.util.concurrent.ScheduledFuture; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicBoolean; import org.apache.bookkeeper.client.AsyncCallback.ReadCallback; import org.apache.bookkeeper.client.BKException.BKDigestMatchException; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.ReadEntryCallback; import org.jboss.netty.buffer.ChannelBuffer; import org.jboss.netty.buffer.ChannelBufferInputStream; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Sequence of entries of a ledger that represents a pending read operation. * When all the data read has come back, the application callback is called. 
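* <p>
* Instances are created internally by LedgerHandle; from the application's
* point of view this corresponds to a call such as the following (a minimal
* sketch, error handling omitted):
* <pre>{@code
* lh.asyncReadEntries(0, lastEntryId, new ReadCallback() {
*     public void readComplete(int rc, LedgerHandle lh,
*                              Enumeration<LedgerEntry> seq, Object ctx) {
*         while (seq.hasMoreElements()) {
*             byte[] data = seq.nextElement().getEntry();
*         }
*     }
* }, null);
* }</pre>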
* This class could be improved because we could start pushing data to the * application as soon as it arrives rather than waiting for the whole thing. * */ class PendingReadOp implements Enumeration, ReadEntryCallback { Logger LOG = LoggerFactory.getLogger(PendingReadOp.class); final int speculativeReadTimeout; final private ScheduledExecutorService scheduler; private ScheduledFuture speculativeTask = null; Queue seq; Set heardFromHosts; ReadCallback cb; Object ctx; LedgerHandle lh; long numPendingEntries; long startEntryId; long endEntryId; final int maxMissedReadsAllowed; class LedgerEntryRequest extends LedgerEntry { final static int NOT_FOUND = -1; int nextReplicaIndexToReadFrom = 0; AtomicBoolean complete = new AtomicBoolean(false); int firstError = BKException.Code.OK; int numMissedEntryReads = 0; final ArrayList ensemble; final List writeSet; final BitSet sentReplicas; final BitSet erroredReplicas; LedgerEntryRequest(ArrayList ensemble, long lId, long eId) { super(lId, eId); this.ensemble = ensemble; this.writeSet = lh.distributionSchedule.getWriteSet(entryId); this.sentReplicas = new BitSet(lh.getLedgerMetadata().getWriteQuorumSize()); this.erroredReplicas = new BitSet(lh.getLedgerMetadata().getWriteQuorumSize()); } private int getReplicaIndex(InetSocketAddress host) { int bookieIndex = ensemble.indexOf(host); if (bookieIndex == -1) { return NOT_FOUND; } return writeSet.indexOf(bookieIndex); } private BitSet getSentToBitSet() { BitSet b = new BitSet(ensemble.size()); for (int i = 0; i < sentReplicas.length(); i++) { if (sentReplicas.get(i)) { b.set(writeSet.get(i)); } } return b; } private BitSet getHeardFromBitSet(Set heardFromHosts) { BitSet b = new BitSet(ensemble.size()); for (InetSocketAddress i : heardFromHosts) { int index = ensemble.indexOf(i); if (index != -1) { b.set(index); } } return b; } private boolean readsOutstanding() { return (sentReplicas.cardinality() - erroredReplicas.cardinality()) > 0; } /** * Send to next replica speculatively, if required and possible. * This returns the host we may have sent to for unit testing. * @return host we sent to if we sent. null otherwise. 
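* (A speculative read is only issued if no response at all has been heard,
* even for other entries, from any replica this entry has already been sent
* to; see the bitset intersection below.)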
*/ synchronized InetSocketAddress maybeSendSpeculativeRead(Set heardFromHosts) { if (nextReplicaIndexToReadFrom >= getLedgerMetadata().getWriteQuorumSize()) { return null; } BitSet sentTo = getSentToBitSet(); BitSet heardFrom = getHeardFromBitSet(heardFromHosts); sentTo.and(heardFrom); // only send another read, if we have had no response at all (even for other entries) // from any of the other bookies we have sent the request to if (sentTo.cardinality() == 0) { return sendNextRead(); } else { return null; } } synchronized InetSocketAddress sendNextRead() { if (nextReplicaIndexToReadFrom >= getLedgerMetadata().getWriteQuorumSize()) { // we are done, the read has failed from all replicas, just fail the // read // Do it a bit pessimistically, only when finished trying all replicas // to check whether we received more missed reads than maxMissedReadsAllowed if (BKException.Code.BookieHandleNotAvailableException == firstError && numMissedEntryReads > maxMissedReadsAllowed) { firstError = BKException.Code.NoSuchEntryException; } submitCallback(firstError); return null; } int replica = nextReplicaIndexToReadFrom; int bookieIndex = lh.distributionSchedule.getWriteSet(entryId).get(nextReplicaIndexToReadFrom); nextReplicaIndexToReadFrom++; try { InetSocketAddress to = ensemble.get(bookieIndex); sendReadTo(to, this); sentReplicas.set(replica); return to; } catch (InterruptedException ie) { LOG.error("Interrupted reading entry " + this, ie); Thread.currentThread().interrupt(); submitCallback(BKException.Code.ReadException); return null; } } synchronized void logErrorAndReattemptRead(InetSocketAddress host, String errMsg, int rc) { if (BKException.Code.OK == firstError || BKException.Code.NoSuchEntryException == firstError) { firstError = rc; } else if (BKException.Code.BookieHandleNotAvailableException == firstError && BKException.Code.NoSuchEntryException != rc) { // if other exception rather than NoSuchEntryException is returned // we need to update firstError to indicate that it might be a valid read but just failed. firstError = rc; } if (BKException.Code.NoSuchEntryException == rc) { ++numMissedEntryReads; LOG.debug("No such entry found on bookie. L{} E{} bookie: {}", new Object[] { lh.ledgerId, entryId, host }); } else { LOG.debug(errMsg + " while reading L{} E{} from bookie: {}", new Object[] { lh.ledgerId, entryId, host }); } int replica = getReplicaIndex(host); if (replica == NOT_FOUND) { LOG.error("Received error from a host which is not in the ensemble {} {}.", host, ensemble); return; } erroredReplicas.set(replica); if (!readsOutstanding()) { sendNextRead(); } } // return true if we managed to complete the entry // return false if the read entry is not complete or it is already completed before boolean complete(InetSocketAddress host, final ChannelBuffer buffer) { ChannelBufferInputStream is; try { is = lh.macManager.verifyDigestAndReturnData(entryId, buffer); } catch (BKDigestMatchException e) { logErrorAndReattemptRead(host, "Mac mismatch", BKException.Code.DigestMatchException); return false; } if (!complete.getAndSet(true)) { entryDataStream = is; /* * The length is a long and it is the last field of the metadata of an entry. * Consequently, we have to subtract 8 from METADATA_LENGTH to get the length. 
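* (The metadata prefix written by DigestManager consists of four longs:
* ledger id, entry id, last add confirmed, and length; the length therefore
* starts 8 bytes before the end of the METADATA_LENGTH-byte header.)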
*/ length = buffer.getLong(DigestManager.METADATA_LENGTH - 8); return true; } else { return false; } } boolean isComplete() { return complete.get(); } public String toString() { return String.format("L%d-E%d", ledgerId, entryId); } } PendingReadOp(LedgerHandle lh, ScheduledExecutorService scheduler, long startEntryId, long endEntryId, ReadCallback cb, Object ctx) { seq = new ArrayBlockingQueue((int) ((endEntryId + 1) - startEntryId)); this.cb = cb; this.ctx = ctx; this.lh = lh; this.startEntryId = startEntryId; this.endEntryId = endEntryId; this.scheduler = scheduler; numPendingEntries = endEntryId - startEntryId + 1; maxMissedReadsAllowed = getLedgerMetadata().getWriteQuorumSize() - getLedgerMetadata().getAckQuorumSize(); speculativeReadTimeout = lh.bk.getConf().getSpeculativeReadTimeout(); heardFromHosts = new HashSet(); } protected LedgerMetadata getLedgerMetadata() { return lh.metadata; } public void initiate() throws InterruptedException { long nextEnsembleChange = startEntryId, i = startEntryId; ArrayList ensemble = null; if (speculativeReadTimeout > 0) { speculativeTask = scheduler.scheduleWithFixedDelay(new Runnable() { public void run() { int x = 0; for (LedgerEntryRequest r : seq) { if (!r.isComplete()) { if (null != r.maybeSendSpeculativeRead(heardFromHosts)) { LOG.debug("Send speculative read for {}. Hosts heard are {}.", r, heardFromHosts); ++x; } } } if (x > 0) { LOG.debug("Send {} speculative reads for ledger {} ({}, {}). Hosts heard are {}.", new Object[] { x, lh.getId(), startEntryId, endEntryId, heardFromHosts }); } } }, speculativeReadTimeout, speculativeReadTimeout, TimeUnit.MILLISECONDS); } do { if (i == nextEnsembleChange) { ensemble = getLedgerMetadata().getEnsemble(i); nextEnsembleChange = getLedgerMetadata().getNextEnsembleChange(i); } LedgerEntryRequest entry = new LedgerEntryRequest(ensemble, lh.ledgerId, i); seq.add(entry); i++; entry.sendNextRead(); } while (i <= endEntryId); } private static class ReadContext { final InetSocketAddress to; final LedgerEntryRequest entry; ReadContext(InetSocketAddress to, LedgerEntryRequest entry) { this.to = to; this.entry = entry; } } void sendReadTo(InetSocketAddress to, LedgerEntryRequest entry) throws InterruptedException { lh.throttler.acquire(); lh.bk.bookieClient.readEntry(to, lh.ledgerId, entry.entryId, this, new ReadContext(to, entry)); } @Override public void readEntryComplete(int rc, long ledgerId, final long entryId, final ChannelBuffer buffer, Object ctx) { final ReadContext rctx = (ReadContext)ctx; final LedgerEntryRequest entry = rctx.entry; if (rc != BKException.Code.OK) { entry.logErrorAndReattemptRead(rctx.to, "Error: " + BKException.getMessage(rc), rc); return; } heardFromHosts.add(rctx.to); if (entry.complete(rctx.to, buffer)) { numPendingEntries--; if (numPendingEntries == 0) { submitCallback(BKException.Code.OK); } } if(numPendingEntries < 0) LOG.error("Read too many values"); } private void submitCallback(int code) { if (speculativeTask != null) { speculativeTask.cancel(true); speculativeTask = null; } if (code != BKException.Code.OK) { long firstUnread = LedgerHandle.INVALID_ENTRY_ID; for (LedgerEntryRequest req : seq) { if (!req.isComplete()) { firstUnread = req.getEntryId(); break; } } LOG.error("Read of ledger entry failed: L{} E{}-E{}, Heard from {}. 
First unread entry is {}", new Object[] { lh.getId(), startEntryId, endEntryId, heardFromHosts, firstUnread }); } cb.readComplete(code, lh, PendingReadOp.this, PendingReadOp.this.ctx); }
public boolean hasMoreElements() { return !seq.isEmpty(); } public LedgerEntry nextElement() throws NoSuchElementException { return seq.remove(); } public int size() { return seq.size(); } } ReadLastConfirmedOp.java000066400000000000000000000125731244507361200354140ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/client/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.bookkeeper.client; import org.apache.bookkeeper.client.BKException.BKDigestMatchException; import org.apache.bookkeeper.client.DigestManager.RecoveryData; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.ReadEntryCallback; import org.apache.bookkeeper.proto.BookieProtocol; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.jboss.netty.buffer.ChannelBuffer;
/** * This class encapsulates the read last confirmed operation.
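* <p>
* It broadcasts the read to the whole ensemble and completes once the
* responses form a coverage set: roughly, every possible write quorum must
* have answered. For example (illustrative numbers) with ensemble size 3,
* write quorum 2 and ack quorum 2, the write quorums are {0,1}, {1,2} and
* {2,0}; a response from bookie 1 alone leaves {2,0} unheard, while responses
* from bookies 0 and 2 touch all three quorums, so the operation can complete.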
* */ class ReadLastConfirmedOp implements ReadEntryCallback { static final Logger LOG = LoggerFactory.getLogger(ReadLastConfirmedOp.class); LedgerHandle lh; int numResponsesPending; RecoveryData maxRecoveredData; volatile boolean completed = false; LastConfirmedDataCallback cb; final DistributionSchedule.QuorumCoverageSet coverageSet; /** * Wrapper to get all recovered data from the request */ interface LastConfirmedDataCallback { public void readLastConfirmedDataComplete(int rc, RecoveryData data); } public ReadLastConfirmedOp(LedgerHandle lh, LastConfirmedDataCallback cb) { this.cb = cb; this.maxRecoveredData = new RecoveryData(LedgerHandle.INVALID_ENTRY_ID, 0); this.lh = lh; this.numResponsesPending = lh.metadata.getEnsembleSize(); this.coverageSet = lh.distributionSchedule.getCoverageSet(); } public void initiate() { for (int i = 0; i < lh.metadata.currentEnsemble.size(); i++) { lh.bk.bookieClient.readEntry(lh.metadata.currentEnsemble.get(i), lh.ledgerId, BookieProtocol.LAST_ADD_CONFIRMED, this, i); } } public void initiateWithFencing() { for (int i = 0; i < lh.metadata.currentEnsemble.size(); i++) { lh.bk.bookieClient.readEntryAndFenceLedger(lh.metadata.currentEnsemble.get(i), lh.ledgerId, lh.ledgerKey, BookieProtocol.LAST_ADD_CONFIRMED, this, i); } } public synchronized void readEntryComplete(final int rc, final long ledgerId, final long entryId, final ChannelBuffer buffer, final Object ctx) { int bookieIndex = (Integer) ctx; numResponsesPending--; boolean heardValidResponse = false; if (rc == BKException.Code.OK) { try { RecoveryData recoveryData = lh.macManager.verifyDigestAndReturnLastConfirmed(buffer); if (recoveryData.lastAddConfirmed > maxRecoveredData.lastAddConfirmed) { maxRecoveredData = recoveryData; } heardValidResponse = true; } catch (BKDigestMatchException e) { // Too bad, this bookie didn't give us a valid answer, we // still might be able to recover though so continue LOG.error("Mac mismatch for ledger: " + ledgerId + ", entry: " + entryId + " while reading last entry from bookie: " + lh.metadata.currentEnsemble.get(bookieIndex)); } } if (rc == BKException.Code.NoSuchLedgerExistsException || rc == BKException.Code.NoSuchEntryException) { // this still counts as a valid response, e.g., if the client crashed without writing any entry heardValidResponse = true; } if (rc == BKException.Code.UnauthorizedAccessException && !completed) { cb.readLastConfirmedDataComplete(rc, maxRecoveredData); completed = true; } // other return codes dont count as valid responses if (heardValidResponse && coverageSet.addBookieAndCheckCovered(bookieIndex) && !completed) { completed = true; LOG.debug("Read Complete with enough validResponses for ledger: {}, entry: {}", ledgerId, entryId); cb.readLastConfirmedDataComplete(BKException.Code.OK, maxRecoveredData); return; } if (numResponsesPending == 0 && !completed) { // Have got all responses back but was still not enough, just fail the operation LOG.error("While readLastConfirmed ledger: " + ledgerId + " did not hear success responses from all quorums"); cb.readLastConfirmedDataComplete(BKException.Code.LedgerRecoveryException, maxRecoveredData); } } } ReadOnlyLedgerHandle.java000066400000000000000000000146451244507361200355450ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/client/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. 
See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.client; import org.apache.bookkeeper.client.AsyncCallback.AddCallback; import org.apache.bookkeeper.client.AsyncCallback.CloseCallback; import org.apache.bookkeeper.client.BookKeeper.DigestType; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.LedgerMetadataListener; import org.apache.bookkeeper.util.SafeRunnable; import org.apache.bookkeeper.versioning.Version; import java.security.GeneralSecurityException; import java.net.InetSocketAddress; import java.util.concurrent.RejectedExecutionException; /** * Read only ledger handle. This ledger handle allows you to * read from a ledger but not to write to it. It overrides all * the public write operations from LedgerHandle. * It should be returned for BookKeeper#openLedger operations. */ class ReadOnlyLedgerHandle extends LedgerHandle implements LedgerMetadataListener { class MetadataUpdater extends SafeRunnable { final LedgerMetadata m; MetadataUpdater(LedgerMetadata metadata) { this.m = metadata; } @Override public void safeRun() { Version.Occurred occurred = ReadOnlyLedgerHandle.this.metadata.getVersion().compare(this.m.getVersion()); if (Version.Occurred.BEFORE == occurred) { LOG.info("Updated ledger metadata for ledger {} to {}.", ledgerId, this.m); ReadOnlyLedgerHandle.this.metadata = this.m; } } } ReadOnlyLedgerHandle(BookKeeper bk, long ledgerId, LedgerMetadata metadata, DigestType digestType, byte[] password, boolean watch) throws GeneralSecurityException, NumberFormatException { super(bk, ledgerId, metadata, digestType, password); if (watch) { bk.getLedgerManager().registerLedgerMetadataListener(ledgerId, this); } } @Override public void close() throws InterruptedException, BKException { bk.getLedgerManager().unregisterLedgerMetadataListener(ledgerId, this); } @Override public void asyncClose(CloseCallback cb, Object ctx) { bk.getLedgerManager().unregisterLedgerMetadataListener(ledgerId, this); cb.closeComplete(BKException.Code.OK, this, ctx); } @Override public long addEntry(byte[] data) throws InterruptedException, BKException { return addEntry(data, 0, data.length); } @Override public long addEntry(byte[] data, int offset, int length) throws InterruptedException, BKException { LOG.error("Tried to add entry on a Read-Only ledger handle, ledgerid=" + ledgerId); throw BKException.create(BKException.Code.IllegalOpException); } @Override public void asyncAddEntry(final byte[] data, final AddCallback cb, final Object ctx) { asyncAddEntry(data, 0, data.length, cb, ctx); } @Override public void asyncAddEntry(final byte[] data, final int offset, final int length, final AddCallback cb, final Object ctx) { LOG.error("Tried to add entry on a Read-Only ledger handle, ledgerid=" + ledgerId); cb.addComplete(BKException.Code.IllegalOpException, this, LedgerHandle.INVALID_ENTRY_ID, ctx); } @Override void handleBookieFailure(final 
InetSocketAddress addr, final int bookieIndex) { blockAddCompletions.incrementAndGet(); synchronized (metadata) { try { if (!metadata.currentEnsemble.get(bookieIndex).equals(addr)) { // ensemble has already changed, failure of this addr is immaterial LOG.debug("Write did not succeed to {}, bookieIndex {}," +" but we have already fixed it.", addr, bookieIndex); blockAddCompletions.decrementAndGet(); return; } replaceBookieInMetadata(addr, bookieIndex); blockAddCompletions.decrementAndGet(); // the failed bookie has been replaced unsetSuccessAndSendWriteRequest(bookieIndex); } catch (BKException.BKNotEnoughBookiesException e) { LOG.error("Could not get additional bookie to " + "remake ensemble, closing ledger: " + ledgerId); handleUnrecoverableErrorDuringAdd(e.getCode()); return; } } } @Override public void onChanged(long lid, LedgerMetadata newMetadata) { if (LOG.isDebugEnabled()) { LOG.debug("Received ledger metadata update on {} : {}", lid, newMetadata); } if (this.ledgerId != lid) { return; } if (null == newMetadata) { return; } Version.Occurred occurred = this.metadata.getVersion().compare(newMetadata.getVersion()); if (LOG.isDebugEnabled()) { LOG.debug("Try to update metadata from {} to {} : {}", new Object[] { this.metadata, newMetadata, occurred }); } if (Version.Occurred.BEFORE == occurred) { // the metadata is updated try { bk.mainWorkerPool.submitOrdered(ledgerId, new MetadataUpdater(newMetadata)); } catch (RejectedExecutionException ree) { LOG.error("Failed on submitting updater to update ledger metadata on ledger {} : {}", ledgerId, newMetadata); } } } @Override public String toString() { return String.format("ReadOnlyLedgerHandle(lid = %d, id = %d)", ledgerId, super.hashCode()); } } RoundRobinDistributionSchedule.java000066400000000000000000000071101244507361200377140ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/client/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.bookkeeper.client; import org.apache.bookkeeper.util.MathUtils; import java.util.List; import java.util.ArrayList; import java.util.HashSet; /** * A specific {@link DistributionSchedule} that places entries in round-robin * fashion. For ensemble size 3, and quorum size 2, Entry 0 goes to bookie 0 and * 1, entry 1 goes to bookie 1 and 2, and entry 2 goes to bookie 2 and 0, and so * on. 
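* <p>
* Equivalently, entry e is placed on bookies (e + i) mod ensembleSize for
* i in [0, writeQuorumSize). With ensemble size 3 and write quorum 2 the
* write sets work out as:
* <pre>
* entry 0 -> bookies [0, 1]
* entry 1 -> bookies [1, 2]
* entry 2 -> bookies [2, 0]
* entry 3 -> bookies [0, 1]
* </pre>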
* */ class RoundRobinDistributionSchedule implements DistributionSchedule { private int writeQuorumSize; private int ackQuorumSize; private int ensembleSize; public RoundRobinDistributionSchedule(int writeQuorumSize, int ackQuorumSize, int ensembleSize) { this.writeQuorumSize = writeQuorumSize; this.ackQuorumSize = ackQuorumSize; this.ensembleSize = ensembleSize; } @Override public List getWriteSet(long entryId) { List set = new ArrayList(); for (int i = 0; i < this.writeQuorumSize; i++) { set.add((int)((entryId + i) % ensembleSize)); } return set; } @Override public AckSet getAckSet() { final HashSet ackSet = new HashSet(); return new AckSet() { public boolean addBookieAndCheck(int bookieIndexHeardFrom) { ackSet.add(bookieIndexHeardFrom); return ackSet.size() >= ackQuorumSize; } public void removeBookie(int bookie) { ackSet.remove(bookie); } }; } private class RRQuorumCoverageSet implements QuorumCoverageSet { private final boolean[] covered = new boolean[ensembleSize]; private RRQuorumCoverageSet() { for (int i = 0; i < covered.length; i++) { covered[i] = false; } } public synchronized boolean addBookieAndCheckCovered(int bookieIndexHeardFrom) { covered[bookieIndexHeardFrom] = true; // now check if there are any write quorums, with |ackQuorum| nodes available for (int i = 0; i < ensembleSize; i++) { int nodesNotCovered = 0; for (int j = 0; j < writeQuorumSize; j++) { int nodeIndex = (i + j) % ensembleSize; if (!covered[nodeIndex]) { nodesNotCovered++; } } if (nodesNotCovered >= ackQuorumSize) { return false; } } return true; } } @Override public QuorumCoverageSet getCoverageSet() { return new RRQuorumCoverageSet(); } @Override public boolean hasEntry(long entryId, int bookieIndex) { return getWriteSet(entryId).contains(bookieIndex); } } SyncCounter.java000066400000000000000000000035511244507361200340370ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/client/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
* */ package org.apache.bookkeeper.client; import java.util.Enumeration; /** * Implements objects to help with the synchronization of asynchronous calls * */ class SyncCounter { int i; int rc; int total; Enumeration seq = null; LedgerHandle lh = null; synchronized void inc() { i++; total++; } synchronized void dec() { i--; notifyAll(); } synchronized void block(int limit) throws InterruptedException { while (i > limit) { int prev = i; wait(); if (i == prev) { break; } } } synchronized int total() { return total; } void setrc(int rc) { this.rc = rc; } int getrc() { return rc; } void setSequence(Enumeration seq) { this.seq = seq; } Enumeration getSequence() { return seq; } void setLh(LedgerHandle lh) { this.lh = lh; } LedgerHandle getLh() { return lh; } } bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/conf/000077500000000000000000000000001244507361200304425ustar00rootroot00000000000000AbstractConfiguration.java000066400000000000000000000167451244507361200355360ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/conf/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package org.apache.bookkeeper.conf; import java.net.URL; import org.apache.commons.configuration.CompositeConfiguration; import org.apache.commons.configuration.Configuration; import org.apache.commons.configuration.ConfigurationException; import org.apache.commons.configuration.PropertiesConfiguration; import org.apache.commons.configuration.SystemConfiguration; import org.apache.bookkeeper.meta.LedgerManagerFactory; import org.apache.bookkeeper.util.ReflectionUtils; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Abstract configuration */ public abstract class AbstractConfiguration extends CompositeConfiguration { static final Logger LOG = LoggerFactory.getLogger(AbstractConfiguration.class); private static ClassLoader defaultLoader; static { defaultLoader = Thread.currentThread().getContextClassLoader(); if (null == defaultLoader) { defaultLoader = AbstractConfiguration.class.getClassLoader(); } } // Ledger Manager protected final static String LEDGER_MANAGER_TYPE = "ledgerManagerType"; protected final static String LEDGER_MANAGER_FACTORY_CLASS = "ledgerManagerFactoryClass"; protected final static String ZK_LEDGERS_ROOT_PATH = "zkLedgersRootPath"; protected final static String AVAILABLE_NODE = "available"; protected final static String REREPLICATION_ENTRY_BATCH_SIZE = "rereplicationEntryBatchSize"; // Metastore settings, only being used when LEDGER_MANAGER_FACTORY_CLASS is MSLedgerManagerFactory protected final static String METASTORE_IMPL_CLASS = "metastoreImplClass"; protected final static String METASTORE_MAX_ENTRIES_PER_SCAN = "metastoreMaxEntriesPerScan"; protected AbstractConfiguration() { super(); // add configuration for system properties addConfiguration(new SystemConfiguration()); } /** * You can load configurations in precedence order. The first one takes * precedence over any loaded later. * * @param confURL * Configuration URL */ public void loadConf(URL confURL) throws ConfigurationException { Configuration loadedConf = new PropertiesConfiguration(confURL); addConfiguration(loadedConf); } /** * You can load configuration from other configuration * * @param baseConf * Other Configuration */ public void loadConf(AbstractConfiguration baseConf) { addConfiguration(baseConf); } /** * Load configuration from other configuration object * * @param otherConf * Other configuration object */ public void loadConf(Configuration otherConf) { addConfiguration(otherConf); } /** * Set Ledger Manager Type. * * @param lmType * Ledger Manager Type * @deprecated replaced by {@link #setLedgerManagerFactoryClass} */ @Deprecated public void setLedgerManagerType(String lmType) { setProperty(LEDGER_MANAGER_TYPE, lmType); } /** * Get Ledger Manager Type. * * @return ledger manager type * @throws ConfigurationException * @deprecated replaced by {@link #getLedgerManagerFactoryClass()} */ @Deprecated public String getLedgerManagerType() { return getString(LEDGER_MANAGER_TYPE); } /** * Set Ledger Manager Factory Class Name. * * @param factoryClassName * Ledger Manager Factory Class Name */ public void setLedgerManagerFactoryClassName(String factoryClassName) { setProperty(LEDGER_MANAGER_FACTORY_CLASS, factoryClassName); } /** * Set Ledger Manager Factory Class. * * @param factoryClass * Ledger Manager Factory Class */ public void setLedgerManagerFactoryClass(Class factoryClass) { setProperty(LEDGER_MANAGER_FACTORY_CLASS, factoryClass.getName()); } /** * Get ledger manager factory class. 
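* <p>
* For example (a minimal sketch; HierarchicalLedgerManagerFactory is one of
* the factories shipped with BookKeeper):
* <pre>{@code
* ServerConfiguration conf = new ServerConfiguration();
* conf.setLedgerManagerFactoryClass(
*         org.apache.bookkeeper.meta.HierarchicalLedgerManagerFactory.class);
* Class<? extends LedgerManagerFactory> cls = conf.getLedgerManagerFactoryClass();
* }</pre>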
* * @return ledger manager factory class */ public Class<? extends LedgerManagerFactory> getLedgerManagerFactoryClass() throws ConfigurationException { return ReflectionUtils.getClass(this, LEDGER_MANAGER_FACTORY_CLASS, null, LedgerManagerFactory.class, defaultLoader); }
/** * Set Zk Ledgers Root Path. * * @param zkLedgersPath zk ledgers root path */ public void setZkLedgersRootPath(String zkLedgersPath) { setProperty(ZK_LEDGERS_ROOT_PATH, zkLedgersPath); }
/** * Get Zk Ledgers Root Path. * * @return zk ledgers root path */ public String getZkLedgersRootPath() { return getString(ZK_LEDGERS_ROOT_PATH, "/ledgers"); }
/** * Get the node under which available bookies are stored * * @return Node under which available bookies are stored. */ public String getZkAvailableBookiesPath() { return getZkLedgersRootPath() + "/" + AVAILABLE_NODE; }
/** * Set the max entries to keep in a fragment for re-replication. If a fragment * has more entries than this count, then the original fragment will be * split into multiple small logical fragments, each capped at * rereplicationEntryBatchSize entries. So re-replication will happen in * batches. */ public void setRereplicationEntryBatchSize(long rereplicationEntryBatchSize) { setProperty(REREPLICATION_ENTRY_BATCH_SIZE, rereplicationEntryBatchSize); }
/** * Get the re-replication entry batch size */ public long getRereplicationEntryBatchSize() { return getLong(REREPLICATION_ENTRY_BATCH_SIZE, 10); }
/** * Get metastore implementation class. * * @return metastore implementation class name. */ public String getMetastoreImplClass() { return getString(METASTORE_IMPL_CLASS); }
/** * Set metastore implementation class. * * @param metastoreImplClass * Metastore implementation Class name. */ public void setMetastoreImplClass(String metastoreImplClass) { setProperty(METASTORE_IMPL_CLASS, metastoreImplClass); }
/** * Get max entries per scan in metastore. * * @return max entries per scan in metastore. */ public int getMetastoreMaxEntriesPerScan() { return getInt(METASTORE_MAX_ENTRIES_PER_SCAN, 50); }
/** * Set max entries per scan in metastore. * * @param maxEntries * Max entries per scan in metastore. */ public void setMetastoreMaxEntriesPerScan(int maxEntries) { setProperty(METASTORE_MAX_ENTRIES_PER_SCAN, maxEntries); } } ClientConfiguration.java000066400000000000000000000317341244507361200352030ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/conf/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License.
*/ package org.apache.bookkeeper.conf; import static com.google.common.base.Charsets.UTF_8; import java.util.List; import java.util.concurrent.TimeUnit; import org.apache.bookkeeper.client.BookKeeper.DigestType; import org.apache.commons.lang.StringUtils;
/** * Configuration settings for client side */ public class ClientConfiguration extends AbstractConfiguration { // Zookeeper Parameters protected final static String ZK_TIMEOUT = "zkTimeout"; protected final static String ZK_SERVERS = "zkServers"; // Throttle value protected final static String THROTTLE = "throttle"; // Digest Type protected final static String DIGEST_TYPE = "digestType"; // Passwd protected final static String PASSWD = "passwd"; // NIO Parameters protected final static String CLIENT_TCP_NODELAY = "clientTcpNoDelay"; protected final static String READ_TIMEOUT = "readTimeout"; protected final static String SPECULATIVE_READ_TIMEOUT = "speculativeReadTimeout"; // Timeout Setting protected final static String ADD_ENTRY_TIMEOUT_SEC = "addEntryTimeoutSec"; protected final static String READ_ENTRY_TIMEOUT_SEC = "readEntryTimeoutSec"; protected final static String TIMEOUT_TASK_INTERVAL_MILLIS = "timeoutTaskIntervalMillis"; // Number of Worker Threads protected final static String NUM_WORKER_THREADS = "numWorkerThreads";
/** * Construct a default client-side configuration */ public ClientConfiguration() { super(); }
/** * Construct a client-side configuration using a base configuration * * @param conf * Base configuration */ public ClientConfiguration(AbstractConfiguration conf) { super(); loadConf(conf); }
/** * Get throttle value * * @return throttle value * @see #setThrottleValue */ public int getThrottleValue() { return this.getInt(THROTTLE, 5000); }
/** * Set throttle value. * * Since BookKeeper processes requests asynchronously, it holds the pending * requests in a queue. You may easily run out of memory if you produce more * requests than the bookie servers can handle. To prevent that from * happening, you can set a throttle value here. * * @param throttle * Throttle Value * @return client configuration */ public ClientConfiguration setThrottleValue(int throttle) { this.setProperty(THROTTLE, Integer.toString(throttle)); return this; }
/** * Get digest type used in bookkeeper admin * * @return digest type * @see #setBookieRecoveryDigestType */ public DigestType getBookieRecoveryDigestType() { return DigestType.valueOf(this.getString(DIGEST_TYPE, DigestType.CRC32.toString())); }
/** * Set digest type used in bookkeeper admin. * * The digest type and passwd are used to open ledgers for the admin tool. * For now, assume that all ledgers were created with the same DigestType * and password.
In the future, this admin tool will need to know, for each * ledger, what DigestType and password were used to create it before it * can open it. These values will come from System properties, though fixed * defaults are defined here. * * @param passwd * Password * @return client configuration */ public ClientConfiguration setBookieRecoveryPasswd(byte[] passwd) { setProperty(PASSWD, new String(passwd, UTF_8)); return this; } /** * Is tcp connection no delay. * * @return tcp socket nodelay setting * @see #setClientTcpNoDelay */ public boolean getClientTcpNoDelay() { return getBoolean(CLIENT_TCP_NODELAY, true); } /** * Set socket nodelay setting. * * This setting is used to enable/disable Nagle's algorithm, which is a means of * improving the efficiency of TCP/IP networks by reducing the number of packets * that need to be sent over the network. If you are sending many small messages, * such that more than one can fit in a single IP packet, setting client.tcpnodelay * to false to enable Nagle's algorithm can provide better performance. *
* Default value is true. * * @param noDelay * NoDelay setting * @return client configuration */ public ClientConfiguration setClientTcpNoDelay(boolean noDelay) { setProperty(CLIENT_TCP_NODELAY, Boolean.toString(noDelay)); return this; } /** * Get zookeeper servers to connect to * * @return zookeeper servers */ public String getZkServers() { List servers = getList(ZK_SERVERS, null); if (null == servers || 0 == servers.size()) { return "localhost"; } return StringUtils.join(servers, ","); } /** * Set zookeeper servers to connect to * * @param zkServers * ZooKeeper servers to connect to * @return client configuration */ public ClientConfiguration setZkServers(String zkServers) { setProperty(ZK_SERVERS, zkServers); return this; } /** * Get zookeeper timeout, in milliseconds * * @return zookeeper client timeout */ public int getZkTimeout() { return getInt(ZK_TIMEOUT, 10000); } /** * Set zookeeper timeout, in milliseconds * * @param zkTimeout * ZooKeeper client timeout * @return client configuration */ public ClientConfiguration setZkTimeout(int zkTimeout) { setProperty(ZK_TIMEOUT, Integer.toString(zkTimeout)); return this; } /** * Get the socket read timeout. This is the number of * seconds we wait without hearing a response from a bookie * before we consider it failed. * * The default is 5 seconds. * * @return the current read timeout in seconds * @deprecated use {@link #getReadEntryTimeout()} or {@link #getAddEntryTimeout()} instead */ @Deprecated public int getReadTimeout() { return getInt(READ_TIMEOUT, 5); } /** * Set the socket read timeout. * @see #getReadTimeout() * @param timeout The new read timeout in seconds * @return client configuration * @deprecated use {@link #setReadEntryTimeout(int)} or {@link #setAddEntryTimeout(int)} instead */ @Deprecated public ClientConfiguration setReadTimeout(int timeout) { setProperty(READ_TIMEOUT, Integer.toString(timeout)); return this; } /** * Get the timeout for an add request. This is the number of seconds we wait without hearing * a response for an add request from a bookie before we consider it failed. * * The default value is 5 seconds for backwards compatibility. * * @return add entry timeout. */ @SuppressWarnings("deprecation") public int getAddEntryTimeout() { return getInt(ADD_ENTRY_TIMEOUT_SEC, getReadTimeout()); } /** * Set timeout for add entry request. * @see #getAddEntryTimeout() * * @param timeout * The new add entry timeout in seconds. * @return client configuration. */ public ClientConfiguration setAddEntryTimeout(int timeout) { setProperty(ADD_ENTRY_TIMEOUT_SEC, timeout); return this; } /** * Get the timeout for read entry. This is the number of seconds we wait without hearing * a response for a read entry request from a bookie before we consider it failed. By default, * we use the socket timeout specified by {@link #getReadTimeout()}. * * @return read entry timeout. */ @SuppressWarnings("deprecation") public int getReadEntryTimeout() { return getInt(READ_ENTRY_TIMEOUT_SEC, getReadTimeout()); } /** * Set the timeout for read entry request. * @see #getReadEntryTimeout() * * @param timeout * The new read entry timeout in seconds. * @return client configuration. */ public ClientConfiguration setReadEntryTimeout(int timeout) { setProperty(READ_ENTRY_TIMEOUT_SEC, timeout); return this; } /** * Get the interval between successive executions of the PerChannelBookieClient's * TimeoutTask. This value is in milliseconds. Every X milliseconds, the timeout task * will be executed and it will error out entries that have timed out. * * The interval is deliberately aggressive, so that pending requests do not accumulate due to slow responses.
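A minimal sketch of the zookeeper and timeout settings above, assuming a three-node ensemble (the addresses and values are hypothetical); each setter returns the configuration, so calls can be chained:

ClientConfiguration conf = new ClientConfiguration()
    .setZkServers("zk1:2181,zk2:2181,zk3:2181") // assumed ensemble
    .setZkTimeout(10000)     // session timeout, in milliseconds
    .setAddEntryTimeout(5)   // seconds before an add request is considered failed
    .setReadEntryTimeout(5); // seconds before a read request is considered failed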
* @return the interval at which the timeout task runs, in milliseconds */ public long getTimeoutTaskIntervalMillis() { return getLong(TIMEOUT_TASK_INTERVAL_MILLIS, TimeUnit.SECONDS.toMillis(Math.min(getAddEntryTimeout(), getReadEntryTimeout()))); } public ClientConfiguration setTimeoutTaskIntervalMillis(long timeoutMillis) { setProperty(TIMEOUT_TASK_INTERVAL_MILLIS, Long.toString(timeoutMillis)); return this; } /** * Get the number of worker threads. This is the number of * worker threads used by the bookkeeper client to submit operations. * * @return the number of worker threads */ public int getNumWorkerThreads() { return getInt(NUM_WORKER_THREADS, Runtime.getRuntime().availableProcessors()); } /** * Set the number of worker threads. * *

* NOTE: setting the number of worker threads after the BookKeeper object is constructed * will not have any effect on the number of threads in the pool. *

* * @see #getNumWorkerThreads() * @param numThreads number of worker threads used for bookkeeper * @return client configuration */ public ClientConfiguration setNumWorkerThreads(int numThreads) { setProperty(NUM_WORKER_THREADS, numThreads); return this; } /** * Get the period of time after which a speculative entry read should be triggered. * A speculative entry read is sent to the next replica bookie before * an error or response has been received for the previous entry read request. * * A speculative entry read is only sent if we have not heard from the current * replica bookie during the entire read operation which may comprise of many entries. * * Speculative reads allow the client to avoid having to wait for the connect timeout * in the case that a bookie has failed. It induces higher load on the network and on * bookies. This should be taken into account before changing this configuration value. * * @see org.apache.bookkeeper.client.LedgerHandle#asyncReadEntries * @return the speculative read timeout in milliseconds. Default 2000. */ public int getSpeculativeReadTimeout() { return getInt(SPECULATIVE_READ_TIMEOUT, 2000); } /** * Set the speculative read timeout. A lower timeout will reduce read latency in the * case of a failed bookie, while increasing the load on bookies and the network. * * The default is 2000 milliseconds. A value of 0 will disable speculative reads * completely. * * @see #getSpeculativeReadTimeout() * @param timeout the timeout value, in milliseconds * @return client configuration */ public ClientConfiguration setSpeculativeReadTimeout(int timeout) { setProperty(SPECULATIVE_READ_TIMEOUT, timeout); return this; } } ServerConfiguration.java000066400000000000000000000624651244507361200352410ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/conf/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
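Tying the last two client settings together: the worker pool is sized when the BookKeeper client is constructed, and speculative reads trade extra bookie and network load for lower read latency after a bookie failure. A hedged sketch (exception handling elided; the values are examples):

ClientConfiguration conf = new ClientConfiguration()
    .setNumWorkerThreads(4)           // pool is sized at construction; later changes are ignored
    .setSpeculativeReadTimeout(1000); // in milliseconds; 0 disables speculative reads
BookKeeper bkc = new BookKeeper(conf); // may throw IOException/KeeperException/InterruptedException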
*/ package org.apache.bookkeeper.conf; import java.io.File; import java.util.List; import org.apache.commons.lang.StringUtils; /** * Configuration manages server-side settings */ public class ServerConfiguration extends AbstractConfiguration { // Entry Log Parameters protected final static String ENTRY_LOG_SIZE_LIMIT = "logSizeLimit"; protected final static String MINOR_COMPACTION_INTERVAL = "minorCompactionInterval"; protected final static String MINOR_COMPACTION_THRESHOLD = "minorCompactionThreshold"; protected final static String MAJOR_COMPACTION_INTERVAL = "majorCompactionInterval"; protected final static String MAJOR_COMPACTION_THRESHOLD = "majorCompactionThreshold"; protected final static String COMPACTION_MAX_OUTSTANDING_REQUESTS = "compactionMaxOutstandingRequests"; protected final static String COMPACTION_RATE = "compactionRate"; // Gc Parameters protected final static String GC_WAIT_TIME = "gcWaitTime"; // Sync Parameters protected final static String FLUSH_INTERVAL = "flushInterval"; // Bookie death watch interval protected final static String DEATH_WATCH_INTERVAL = "bookieDeathWatchInterval"; // Ledger Cache Parameters protected final static String OPEN_FILE_LIMIT = "openFileLimit"; protected final static String PAGE_LIMIT = "pageLimit"; protected final static String PAGE_SIZE = "pageSize"; // Journal Parameters protected final static String MAX_JOURNAL_SIZE = "journalMaxSizeMB"; protected final static String MAX_BACKUP_JOURNALS = "journalMaxBackups"; // Bookie Parameters protected final static String BOOKIE_PORT = "bookiePort"; protected final static String LISTENING_INTERFACE = "listeningInterface"; protected final static String ALLOW_LOOPBACK = "allowLoopback"; protected final static String JOURNAL_DIR = "journalDirectory"; protected final static String LEDGER_DIRS = "ledgerDirectories"; // NIO Parameters protected final static String SERVER_TCP_NODELAY = "serverTcpNoDelay"; // Zookeeper Parameters protected final static String ZK_TIMEOUT = "zkTimeout"; protected final static String ZK_SERVERS = "zkServers"; // Statistics Parameters protected final static String ENABLE_STATISTICS = "enableStatistics"; protected final static String OPEN_LEDGER_REREPLICATION_GRACE_PERIOD = "openLedgerRereplicationGracePeriod"; //ReadOnly mode support on all disk full protected final static String READ_ONLY_MODE_ENABLED = "readOnlyModeEnabled"; //Disk utilization protected final static String DISK_USAGE_THRESHOLD = "diskUsageThreshold"; protected final static String DISK_CHECK_INTERVAL = "diskCheckInterval"; protected final static String AUDITOR_PERIODIC_CHECK_INTERVAL = "auditorPeriodicCheckInterval"; protected final static String AUDITOR_PERIODIC_BOOKIE_CHECK_INTERVAL = "auditorPeriodicBookieCheckInterval"; protected final static String AUTO_RECOVERY_DAEMON_ENABLED = "autoRecoveryDaemonEnabled"; /** * Construct a default configuration object */ public ServerConfiguration() { super(); } /** * Construct a configuration based on other configuration * * @param conf * Other configuration */ public ServerConfiguration(AbstractConfiguration conf) { super(); loadConf(conf); } /** * Get entry logger size limitation * * @return entry logger size limitation */ public long getEntryLogSizeLimit() { return this.getLong(ENTRY_LOG_SIZE_LIMIT, 2 * 1024 * 1024 * 1024L); } /** * Set entry logger size limitation * * @param logSizeLimit * new log size limitation */ public ServerConfiguration setEntryLogSizeLimit(long logSizeLimit) { this.setProperty(ENTRY_LOG_SIZE_LIMIT, Long.toString(logSizeLimit)); return this; 
} /** * Get garbage collection wait time, in milliseconds * * @return gc wait time */ public long getGcWaitTime() { return this.getLong(GC_WAIT_TIME, 1000); } /** * Set garbage collection wait time, in milliseconds * * @param gcWaitTime * gc wait time * @return server configuration */ public ServerConfiguration setGcWaitTime(long gcWaitTime) { this.setProperty(GC_WAIT_TIME, Long.toString(gcWaitTime)); return this; } /** * Get flush interval, in milliseconds * * @return flush interval */ public int getFlushInterval() { return this.getInt(FLUSH_INTERVAL, 100); } /** * Set flush interval, in milliseconds * * @param flushInterval * Flush Interval * @return server configuration */ public ServerConfiguration setFlushInterval(int flushInterval) { this.setProperty(FLUSH_INTERVAL, Integer.toString(flushInterval)); return this; } /** * Get bookie death watch interval, in milliseconds * * @return watch interval */ public int getDeathWatchInterval() { return this.getInt(DEATH_WATCH_INTERVAL, 1000); } /** * Get open file limit * * @return max number of files to open */ public int getOpenFileLimit() { return this.getInt(OPEN_FILE_LIMIT, 900); } /** * Set the limit on the number of open files. * * @param fileLimit * Limit on the number of open files. * @return server configuration */ public ServerConfiguration setOpenFileLimit(int fileLimit) { setProperty(OPEN_FILE_LIMIT, fileLimit); return this; } /** * Get the limit on the number of index pages in the ledger cache * * @return max number of index pages in ledger cache */ public int getPageLimit() { return this.getInt(PAGE_LIMIT, -1); } /** * Set the limit on the number of index pages in the ledger cache. * * @param pageLimit * Limit on the number of index pages in the ledger cache. * @return server configuration */ public ServerConfiguration setPageLimit(int pageLimit) { this.setProperty(PAGE_LIMIT, pageLimit); return this; } /** * Get page size * * @return page size in ledger cache */ public int getPageSize() { return this.getInt(PAGE_SIZE, 8192); } /** * Set page size * * @see #getPageSize() * * @param pageSize * Page Size * @return Server Configuration */ public ServerConfiguration setPageSize(int pageSize) { this.setProperty(PAGE_SIZE, pageSize); return this; } /** * Max journal file size, in MB * * @return max journal file size */ public long getMaxJournalSize() { return this.getLong(MAX_JOURNAL_SIZE, 2 * 1024); } /** * Set new max journal file size, in MB * * @param maxJournalSize * new max journal file size * @return server configuration */ public ServerConfiguration setMaxJournalSize(long maxJournalSize) { this.setProperty(MAX_JOURNAL_SIZE, Long.toString(maxJournalSize)); return this; } /** * Max number of older journal files kept * * @return max number of older journal files to keep */ public int getMaxBackupJournals() { return this.getInt(MAX_BACKUP_JOURNALS, 5); } /** * Set max number of older journal files to keep * * @param maxBackupJournals * Max number of older journal files * @return server configuration */ public ServerConfiguration setMaxBackupJournals(int maxBackupJournals) { this.setProperty(MAX_BACKUP_JOURNALS, Integer.toString(maxBackupJournals)); return this; } /** * Get the port that the bookie server listens on * * @return bookie port */ public int getBookiePort() { return this.getInt(BOOKIE_PORT, 3181); } /** * Set the port that the bookie server listens on * * @param port * Port to listen on * @return server configuration */ public ServerConfiguration setBookiePort(int port) { this.setProperty(BOOKIE_PORT, Integer.toString(port)); return this; } /** * Get the network interface that the bookie should * listen for connections on.
If this is null, then the bookie * will listen for connections on all interfaces. * * @return the network interface to listen on, e.g. eth0, or * null if none is specified */ public String getListeningInterface() { return this.getString(LISTENING_INTERFACE); } /** * Set the network interface that the bookie should listen on. * If not set, the bookie will listen on all interfaces. * * @param iface the interface to listen on * @return server configuration */ public ServerConfiguration setListeningInterface(String iface) { this.setProperty(LISTENING_INTERFACE, iface); return this; } /** * Is the bookie allowed to use a loopback interface as its primary * interface (i.e. the interface it uses to establish its identity)? * * By default, loopback interfaces are not allowed as the primary * interface. * * Using a loopback interface as the primary interface usually indicates * a configuration error. For example, it's fairly common in some VPS setups * to not configure a hostname, or to have the hostname resolve to * 127.0.0.1. If this is the case, then all bookies in the cluster will * establish their identities as 127.0.0.1:3181, and only one will be able * to join the cluster. For VPSs configured like this, you should explicitly * set the listening interface. * * @see #setListeningInterface(String) * @return whether a loopback interface can be used as the primary interface */ public boolean getAllowLoopback() { return this.getBoolean(ALLOW_LOOPBACK, false); } /** * Configure the bookie to allow loopback interfaces to be used * as the primary bookie interface. * * @see #getAllowLoopback * @param allow whether to allow loopback interfaces * @return server configuration */ public ServerConfiguration setAllowLoopback(boolean allow) { this.setProperty(ALLOW_LOOPBACK, allow); return this; } /** * Get dir name to store journal files * * @return journal dir name */ public String getJournalDirName() { return this.getString(JOURNAL_DIR, "/tmp/bk-txn"); } /** * Set dir name to store journal files * * @param journalDir * Dir to store journal files * @return server configuration */ public ServerConfiguration setJournalDirName(String journalDir) { this.setProperty(JOURNAL_DIR, journalDir); return this; } /** * Get dir to store journal files * * @return journal dir */ public File getJournalDir() { String journalDirName = getJournalDirName(); if (null == journalDirName) { return null; } return new File(journalDirName); } /** * Get dir names to store ledger data * * @return ledger dir names; defaults to { "/tmp/bk-data" } if none are set */ public String[] getLedgerDirNames() { String[] ledgerDirs = this.getStringArray(LEDGER_DIRS); if (null == ledgerDirs) { return new String[] { "/tmp/bk-data" }; } return ledgerDirs; } /** * Set dir names to store ledger data * * @param ledgerDirs * Dir names to store ledger data * @return server configuration */ public ServerConfiguration setLedgerDirNames(String[] ledgerDirs) { if (null == ledgerDirs) { return this; } this.setProperty(LEDGER_DIRS, ledgerDirs); return this; } /** * Get dirs that store ledger data * * @return ledger dirs */ public File[] getLedgerDirs() { String[] ledgerDirNames = getLedgerDirNames(); if (null == ledgerDirNames) { return null; } File[] ledgerDirs = new File[ledgerDirNames.length]; for (int i = 0; i < ledgerDirNames.length; i++) { ledgerDirs[i] = new File(ledgerDirNames[i]); } return ledgerDirs; } /** * Is tcp connection no delay.
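A minimal bookie layout sketch using the directory and interface settings above (the paths and interface name are assumptions for illustration):

ServerConfiguration conf = new ServerConfiguration()
    .setBookiePort(3181)                    // the default port
    .setJournalDirName("/data/bk/journal")  // ideally a dedicated device
    .setLedgerDirNames(new String[] { "/data/bk/ledgers" })
    .setListeningInterface("eth0");         // avoid loopback identities, as discussed above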
* * @return tcp socket nodelay setting */ public boolean getServerTcpNoDelay() { return getBoolean(SERVER_TCP_NODELAY, true); } /** * Set socket nodelay setting * * @param noDelay * NoDelay setting * @return server configuration */ public ServerConfiguration setServerTcpNoDelay(boolean noDelay) { setProperty(SERVER_TCP_NODELAY, Boolean.toString(noDelay)); return this; } /** * Get zookeeper servers to connect to * * @return zookeeper servers */ public String getZkServers() { List servers = getList(ZK_SERVERS, null); if (null == servers || 0 == servers.size()) { return null; } return StringUtils.join(servers, ","); } /** * Set zookeeper servers to connect to * * @param zkServers * ZooKeeper servers to connect to */ public ServerConfiguration setZkServers(String zkServers) { setProperty(ZK_SERVERS, zkServers); return this; } /** * Get zookeeper timeout, in milliseconds * * @return zookeeper server timeout */ public int getZkTimeout() { return getInt(ZK_TIMEOUT, 10000); } /** * Set zookeeper timeout, in milliseconds * * @param zkTimeout * ZooKeeper server timeout * @return server configuration */ public ServerConfiguration setZkTimeout(int zkTimeout) { setProperty(ZK_TIMEOUT, Integer.toString(zkTimeout)); return this; } /** * Is statistics enabled * * @return is statistics enabled */ public boolean isStatisticsEnabled() { return getBoolean(ENABLE_STATISTICS, true); } /** * Turn on/off statistics * * @param enabled * Whether statistics enabled or not. * @return server configuration */ public ServerConfiguration setStatisticsEnabled(boolean enabled) { setProperty(ENABLE_STATISTICS, Boolean.toString(enabled)); return this; } /** * Get threshold of minor compaction. * * Entry log files whose remaining size percentage is below * this threshold will be compacted in a minor compaction. * * If it is set to less than zero, the minor compaction is disabled. * * @return threshold of minor compaction */ public double getMinorCompactionThreshold() { return getDouble(MINOR_COMPACTION_THRESHOLD, 0.2f); } /** * Set threshold of minor compaction * * @see #getMinorCompactionThreshold() * * @param threshold * Threshold for minor compaction * @return server configuration */ public ServerConfiguration setMinorCompactionThreshold(double threshold) { setProperty(MINOR_COMPACTION_THRESHOLD, threshold); return this; } /** * Get threshold of major compaction. * * Entry log files whose remaining size percentage is below * this threshold will be compacted in a major compaction. * * If it is set to less than zero, the major compaction is disabled. * * @return threshold of major compaction */ public double getMajorCompactionThreshold() { return getDouble(MAJOR_COMPACTION_THRESHOLD, 0.8f); } /** * Set threshold of major compaction. * * @see #getMajorCompactionThreshold() * * @param threshold * Threshold of major compaction * @return server configuration */ public ServerConfiguration setMajorCompactionThreshold(double threshold) { setProperty(MAJOR_COMPACTION_THRESHOLD, threshold); return this; } /** * Get interval to run minor compaction, in seconds. * * If it is set to less than zero, the minor compaction is disabled.
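Putting the two compaction thresholds together with the intervals documented next, a hedged sketch (the values shown simply restate the defaults):

ServerConfiguration conf = new ServerConfiguration();
conf.setMinorCompactionThreshold(0.2)   // minor-compact logs with < 20% live data
    .setMajorCompactionThreshold(0.8)   // major-compact logs with < 80% live data
    .setMinorCompactionInterval(3600)   // run minor compaction hourly (seconds)
    .setMajorCompactionInterval(86400); // run major compaction daily (seconds)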
* * @return interval of minor compaction, in seconds */ public long getMinorCompactionInterval() { return getLong(MINOR_COMPACTION_INTERVAL, 3600); } /** * Set interval to run minor compaction * * @see #getMinorCompactionInterval() * * @param interval * Interval to run minor compaction * @return server configuration */ public ServerConfiguration setMinorCompactionInterval(long interval) { setProperty(MINOR_COMPACTION_INTERVAL, interval); return this; } /** * Get interval to run major compaction, in seconds. * * If it is set to less than zero, the major compaction is disabled. * * @return interval of major compaction, in seconds */ public long getMajorCompactionInterval() { return getLong(MAJOR_COMPACTION_INTERVAL, 86400); } /** * Set interval to run major compaction. * * @see #getMajorCompactionInterval() * * @param interval * Interval to run major compaction * @return server configuration */ public ServerConfiguration setMajorCompactionInterval(long interval) { setProperty(MAJOR_COMPACTION_INTERVAL, interval); return this; } /** * Set the grace period which the rereplication worker will wait before * fencing and rereplicating a ledger fragment which is still being written * to, on bookie failure. * * The grace period allows the writer to detect the bookie failure, and * start replicating the ledger fragment. If the writer writes nothing * during the grace period, the rereplication worker assumes that it has * crashed and fences the ledger, preventing any further writes to that * ledger. * * @see org.apache.bookkeeper.client.BookKeeper#openLedger * * @param waitTime time to wait before replicating ledger fragment */ public void setOpenLedgerRereplicationGracePeriod(String waitTime) { setProperty(OPEN_LEDGER_REREPLICATION_GRACE_PERIOD, waitTime); } /** * Get the grace period which the rereplication worker waits before * fencing and rereplicating a ledger fragment which is still being written * to, on bookie failure. * * @return the grace period, in milliseconds */ public long getOpenLedgerRereplicationGracePeriod() { return getLong(OPEN_LEDGER_REREPLICATION_GRACE_PERIOD, 30000); } /** * Set whether the bookie is able to go into read-only mode. * If this is set to false, the bookie will shut down on encountering * an error condition. * * @param enabled whether to enable read-only mode. * * @return ServerConfiguration */ public ServerConfiguration setReadOnlyModeEnabled(boolean enabled) { setProperty(READ_ONLY_MODE_ENABLED, enabled); return this; } /** * Get whether read-only mode is enabled. The default is false. * * @return boolean */ public boolean isReadOnlyModeEnabled() { return getBoolean(READ_ONLY_MODE_ENABLED, false); } /** * Set the disk usage threshold, as a fraction of total capacity, * beyond which a disk is considered full during disk checks. * * @param threshold threshold to declare a disk full * * @return ServerConfiguration */ public ServerConfiguration setDiskUsageThreshold(float threshold) { setProperty(DISK_USAGE_THRESHOLD, threshold); return this; } /** * Returns the disk usage threshold. By default it is 0.95. * * @return float */ public float getDiskUsageThreshold() { return getFloat(DISK_USAGE_THRESHOLD, 0.95f); } /** * Set the disk checker interval to monitor ledger disk space * * @param interval interval between disk checks for space.
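The read-only and disk-check settings above combine as follows; a sketch with the defaults made explicit:

ServerConfiguration conf = new ServerConfiguration();
conf.setReadOnlyModeEnabled(true)      // degrade to read-only instead of shutting down
    .setDiskUsageThreshold(0.95f)      // a disk is considered full at 95% usage
    .setDiskCheckInterval(10 * 1000);  // check disks every 10 seconds (milliseconds)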
* * @return ServerConfiguration */ public ServerConfiguration setDiskCheckInterval(int interval) { setProperty(DISK_CHECK_INTERVAL, interval); return this; } /** * Get the disk checker interval * * @return the disk check interval, in milliseconds */ public int getDiskCheckInterval() { return getInt(DISK_CHECK_INTERVAL, 10 * 1000); } /** * Set the regularity at which the auditor will run a check * of all ledgers. This should not be run very often, and at most, * once a day. Setting this to 0 will completely disable the periodic * check. * * @param interval The interval in seconds. e.g. 86400 = 1 day, 604800 = 1 week */ public void setAuditorPeriodicCheckInterval(long interval) { setProperty(AUDITOR_PERIODIC_CHECK_INTERVAL, interval); } /** * Get the regularity at which the auditor checks all ledgers. * @return The interval in seconds. Default is 604800 (1 week). */ public long getAuditorPeriodicCheckInterval() { return getLong(AUDITOR_PERIODIC_CHECK_INTERVAL, 604800); } /** * Set the interval between auditor bookie checks. * The auditor bookie check checks ledger metadata to see which bookies * contain entries for each ledger. If a bookie which should contain entries * is unavailable, then the ledger containing that entry is marked for recovery. * Setting this to 0 disables the periodic check. Bookie checks will still * run when a bookie fails. * * @param interval The period in seconds. */ public void setAuditorPeriodicBookieCheckInterval(long interval) { setProperty(AUDITOR_PERIODIC_BOOKIE_CHECK_INTERVAL, interval); } /** * Get the interval between auditor bookie check runs. * @see #setAuditorPeriodicBookieCheckInterval(long) * @return the interval between bookie check runs, in seconds. Default is 86400 (= 1 day) */ public long getAuditorPeriodicBookieCheckInterval() { return getLong(AUDITOR_PERIODIC_BOOKIE_CHECK_INTERVAL, 86400); } /** * Sets whether the auto-recovery service should start along with the Bookie * server itself. * * @param enabled * - true if the auto-recovery service should be started. Otherwise * false. * @return ServerConfiguration */ public ServerConfiguration setAutoRecoveryDaemonEnabled(boolean enabled) { setProperty(AUTO_RECOVERY_DAEMON_ENABLED, enabled); return this; } /** * Get whether the Bookie itself should start the auto-recovery service. * * @return true if the Bookie should start the auto-recovery service along with * it, false otherwise. */ public boolean isAutoRecoveryDaemonEnabled() { return getBoolean(AUTO_RECOVERY_DAEMON_ENABLED, false); } /** * Get the maximum number of entries which can be compacted without flushing. * Default is 100,000. * * @return the maximum number of unflushed entries */ public int getCompactionMaxOutstandingRequests() { return getInt(COMPACTION_MAX_OUTSTANDING_REQUESTS, 100000); } /** * Set the maximum number of entries which can be compacted without flushing. * * When compacting, the entries are written to the entrylog and the new offsets * are cached in memory. Once the entrylog is flushed the index is updated with * the new offsets. This parameter controls the number of entries added to the * entrylog before a flush is forced. A higher value for this parameter means * more memory will be used for offsets. Each offset consists of 3 longs. * * This parameter should _not_ be modified unless you know what you're doing. * The default is 100,000.
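For the auditor settings above, note that the two interval setters return void, so they cannot be chained; a sketch restating the defaults:

ServerConfiguration conf = new ServerConfiguration();
conf.setAutoRecoveryDaemonEnabled(true);           // run autorecovery inside the bookie
conf.setAuditorPeriodicCheckInterval(604800);      // full ledger check weekly (seconds)
conf.setAuditorPeriodicBookieCheckInterval(86400); // bookie check daily (seconds)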
* * @param maxOutstandingRequests number of entries to compact before flushing * * @return ServerConfiguration */ public ServerConfiguration setCompactionMaxOutstandingRequests(int maxOutstandingRequests) { setProperty(COMPACTION_MAX_OUTSTANDING_REQUESTS, maxOutstandingRequests); return this; } /** * Get the rate of compaction adds. Default is 1,000. * * @return rate of compaction (adds per second) */ public int getCompactionRate() { return getInt(COMPACTION_RATE, 1000); } /** * Set the rate of compaction adds. * * @param rate rate of compaction adds (adds per second) * * @return ServerConfiguration */ public ServerConfiguration setCompactionRate(int rate) { setProperty(COMPACTION_RATE, rate); return this; } } bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/jmx/000077500000000000000000000000001244507361200303135ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/jmx/BKMBeanInfo.java000066400000000000000000000017331244507361200331750ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.bookkeeper.jmx; import org.apache.zookeeper.jmx.ZKMBeanInfo; /** * BookKeeper MBean info interface. */ public interface BKMBeanInfo extends ZKMBeanInfo { } BKMBeanRegistry.java000066400000000000000000000063261244507361200340360ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/jmx/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.bookkeeper.jmx; import javax.management.MalformedObjectNameException; import javax.management.ObjectName; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.apache.zookeeper.jmx.MBeanRegistry; import org.apache.zookeeper.jmx.ZKMBeanInfo; /** * This class provides a unified interface for registering/unregistering of * bookkeeper MBeans with the platform MBean server. It builds a hierarchy of MBeans * where each MBean represented by a filesystem-like path. 
Eventually, this hierarchy * will be stored in the zookeeper data tree instance as a virtual data tree. */ public class BKMBeanRegistry extends MBeanRegistry { static final Logger LOG = LoggerFactory.getLogger(BKMBeanRegistry.class); static final String DOMAIN = "org.apache.BookKeeperService"; static BKMBeanRegistry instance=new BKMBeanRegistry(); public static BKMBeanRegistry getInstance(){ return instance; } protected String getDomainName() { return DOMAIN; } /** * This takes a path, such as /a/b/c, and converts it to * name0=a,name1=b,name2=c * * Copy from zookeeper MBeanRegistry since tokenize is private */ protected int tokenize(StringBuilder sb, String path, int index) { String[] tokens = path.split("/"); for (String s: tokens) { if (s.length()==0) continue; sb.append("name").append(index++).append("=").append(s).append(","); } return index; } /** * Builds an MBean path and creates an ObjectName instance using the path. * @param path MBean path * @param bean the MBean instance * @return ObjectName to be registered with the platform MBean server */ protected ObjectName makeObjectName(String path, ZKMBeanInfo bean) throws MalformedObjectNameException { if(path==null) return null; StringBuilder beanName = new StringBuilder(getDomainName() + ":"); int counter=0; counter=tokenize(beanName,path,counter); tokenize(beanName,bean.getName(),counter); beanName.deleteCharAt(beanName.length()-1); try { return new ObjectName(beanName.toString()); } catch (MalformedObjectNameException e) { LOG.warn("Invalid name \"" + beanName.toString() + "\" for class " + bean.getClass().toString()); throw e; } } } bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/meta/000077500000000000000000000000001244507361200304435ustar00rootroot00000000000000AbstractZkLedgerManager.java000066400000000000000000000476161244507361200357330ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/meta/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
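To make the path-to-ObjectName mapping in BKMBeanRegistry concrete, here is a small standalone re-implementation of the tokenize logic (an illustrative sketch, not part of the class itself):

import javax.management.ObjectName;

public class TokenizeDemo {
    public static void main(String[] args) throws Exception {
        // Mirrors BKMBeanRegistry#tokenize: "/a/b/c" -> "name0=a,name1=b,name2=c,"
        StringBuilder sb = new StringBuilder("org.apache.BookKeeperService:");
        int index = 0;
        for (String s : "/a/b/c".split("/")) {
            if (s.length() == 0) continue;
            sb.append("name").append(index++).append("=").append(s).append(",");
        }
        sb.deleteCharAt(sb.length() - 1); // drop the trailing comma, as makeObjectName does
        System.out.println(new ObjectName(sb.toString()));
        // prints: org.apache.BookKeeperService:name0=a,name1=b,name2=c
    }
}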
*/ package org.apache.bookkeeper.meta; import java.io.IOException; import java.util.HashSet; import java.util.List; import java.util.NavigableSet; import java.util.Set; import java.util.TreeSet; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ConcurrentMap; import java.util.concurrent.Executors; import java.util.concurrent.ScheduledExecutorService; import java.util.concurrent.TimeUnit; import org.apache.bookkeeper.client.BKException; import org.apache.bookkeeper.client.LedgerMetadata; import org.apache.bookkeeper.conf.AbstractConfiguration; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.GenericCallback; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.LedgerMetadataListener; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.MultiCallback; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.Processor; import org.apache.bookkeeper.util.BookKeeperConstants; import org.apache.bookkeeper.util.ZkUtils; import org.apache.bookkeeper.versioning.Version; import org.apache.zookeeper.AsyncCallback; import org.apache.zookeeper.AsyncCallback.DataCallback; import org.apache.zookeeper.AsyncCallback.StatCallback; import org.apache.zookeeper.AsyncCallback.VoidCallback; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.KeeperException.Code; import org.apache.zookeeper.WatchedEvent; import org.apache.zookeeper.Watcher; import org.apache.zookeeper.ZooKeeper; import org.apache.zookeeper.data.Stat; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import com.google.common.util.concurrent.ThreadFactoryBuilder; /** * Abstract ledger manager based on zookeeper, which provides common methods such as querying zk nodes. */ abstract class AbstractZkLedgerManager implements LedgerManager, Watcher { static Logger LOG = LoggerFactory.getLogger(AbstractZkLedgerManager.class); static int ZK_CONNECT_BACKOFF_MS = 200; protected final AbstractConfiguration conf; protected final ZooKeeper zk; protected final String ledgerRootPath; // ledger metadata listeners, keyed by ledger id protected final ConcurrentMap<Long, Set<LedgerMetadataListener>> listeners = new ConcurrentHashMap<Long, Set<LedgerMetadataListener>>(); // we use this to prevent long stack chains from building up in callbacks protected ScheduledExecutorService scheduler; protected class ReadLedgerMetadataTask implements Runnable, GenericCallback<LedgerMetadata> { final long ledgerId; ReadLedgerMetadataTask(long ledgerId) { this.ledgerId = ledgerId; } @Override public void run() { if (null != listeners.get(ledgerId)) { LOG.debug("Re-read ledger metadata for {}.", ledgerId); readLedgerMetadata(ledgerId, this, AbstractZkLedgerManager.this); } else { LOG.debug("Ledger metadata listener for ledger {} is already removed.", ledgerId); } } @Override public void operationComplete(int rc, final LedgerMetadata result) { if (BKException.Code.OK == rc) { final Set<LedgerMetadataListener> listenerSet = listeners.get(ledgerId); if (null != listenerSet) { LOG.debug("Ledger metadata is changed for {} : {}.", ledgerId, result); scheduler.submit(new Runnable() { @Override public void run() { synchronized (listenerSet) { for (LedgerMetadataListener listener : listenerSet) { listener.onChanged(ledgerId, result); } } } }); } } else if (BKException.Code.NoSuchLedgerExistsException == rc) { // the ledger is removed, do nothing Set<LedgerMetadataListener> listenerSet = listeners.remove(ledgerId); if (null != listenerSet) { LOG.debug("Removed ledger metadata listener set on ledger {} as its ledger is deleted : {}", ledgerId, listenerSet.size()); } } else { LOG.warn("Failed on read ledger metadata of ledger {} : {}", ledgerId, rc);
scheduler.schedule(this, ZK_CONNECT_BACKOFF_MS, TimeUnit.MILLISECONDS); } } } /** * ZooKeeper-based Ledger Manager Constructor * * @param conf * Configuration object * @param zk * ZooKeeper Client Handle */ protected AbstractZkLedgerManager(AbstractConfiguration conf, ZooKeeper zk) { this.conf = conf; this.zk = zk; this.ledgerRootPath = conf.getZkLedgersRootPath(); ThreadFactoryBuilder tfb = new ThreadFactoryBuilder().setNameFormat( "ZkLedgerManagerScheduler-%d"); this.scheduler = Executors .newSingleThreadScheduledExecutor(tfb.build()); LOG.debug("Using AbstractZkLedgerManager with root path : {}", ledgerRootPath); } /** * Get the znode path that is used to store ledger metadata * * @param ledgerId * Ledger ID * @return ledger node path */ protected abstract String getLedgerPath(long ledgerId); /** * Get ledger id from its znode ledger path * * @param ledgerPath * Ledger path to store metadata * @return ledger id * @throws IOException when the ledger path is invalid */ protected abstract long getLedgerId(String ledgerPath) throws IOException; @Override public void process(WatchedEvent event) { LOG.info("Received watched event {} from zookeeper based ledger manager.", event); if (Event.EventType.None == event.getType()) { /** TODO: BOOKKEEPER-537 to handle expire events. if (Event.KeeperState.Expired == event.getState()) { LOG.info("ZooKeeper client expired on ledger manager."); Set<Long> keySet = new HashSet<Long>(listeners.keySet()); for (Long lid : keySet) { scheduler.submit(new ReadLedgerMetadataTask(lid)); LOG.info("Re-read ledger metadata for {} after zookeeper session expired.", lid); } } **/ return; } String path = event.getPath(); if (null == path) { return; } final long ledgerId; try { ledgerId = getLedgerId(event.getPath()); } catch (IOException ioe) { LOG.info("Received invalid ledger path {} : ", event.getPath(), ioe); return; } switch (event.getType()) { case NodeDeleted: Set<LedgerMetadataListener> listenerSet = listeners.get(ledgerId); if (null != listenerSet) { synchronized (listenerSet) { LOG.debug("Removed ledger metadata listeners on ledger {} : {}", ledgerId, listenerSet); for (LedgerMetadataListener l : listenerSet) { unregisterLedgerMetadataListener(ledgerId, l); l.onChanged( ledgerId, null ); } } } else { LOG.debug("No ledger metadata listeners to remove from ledger {} after it's deleted.", ledgerId); } break; case NodeDataChanged: new ReadLedgerMetadataTask(ledgerId).run(); break; default: LOG.debug("Received event {} on {}.", event.getType(), event.getPath()); break; } } /** * Removes ledger metadata from ZooKeeper if version matches.
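Given the watcher logic above (NodeDataChanged triggers a metadata re-read, NodeDeleted notifies with null metadata), client code might use the listener API like this; a hedged sketch assuming a LedgerManager instance named ledgerManager and a ledger id of 42:

LedgerMetadataListener listener = new LedgerMetadataListener() {
    @Override
    public void onChanged(long ledgerId, LedgerMetadata metadata) {
        // null metadata signals that the ledger's znode was deleted
    }
};
ledgerManager.registerLedgerMetadataListener(42L, listener);
// ... later, when updates are no longer needed:
ledgerManager.unregisterLedgerMetadataListener(42L, listener);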
* * @param ledgerId ledger identifier * @param version local version of metadata znode * @param cb callback object */ @Override public void removeLedgerMetadata(final long ledgerId, final Version version, final GenericCallback cb) { int znodeVersion = -1; if (Version.NEW == version) { LOG.error("Request to delete ledger {} metadata with version set to the initial one", ledgerId); cb.operationComplete(BKException.Code.MetadataVersionException, (Void)null); return; } else if (Version.ANY != version) { if (!(version instanceof ZkVersion)) { LOG.info("Not an instance of ZKVersion: {}", ledgerId); cb.operationComplete(BKException.Code.MetadataVersionException, (Void)null); return; } else { znodeVersion = ((ZkVersion)version).getZnodeVersion(); } } zk.delete(getLedgerPath(ledgerId), znodeVersion, new VoidCallback() { @Override public void processResult(int rc, String path, Object ctx) { int bkRc; if (rc == KeeperException.Code.NONODE.intValue()) { LOG.warn("Ledger node does not exist in ZooKeeper: ledgerId={}", ledgerId); bkRc = BKException.Code.NoSuchLedgerExistsException; } else if (rc == KeeperException.Code.OK.intValue()) { // removed listener on ledgerId Set listenerSet = listeners.remove(ledgerId); if (null != listenerSet) { LOG.debug("Remove registered ledger metadata listeners on ledger {} after ledger is deleted.", ledgerId, listenerSet); } else { LOG.debug("No ledger metadata listeners to remove from ledger {} when it's being deleted.", ledgerId); } bkRc = BKException.Code.OK; } else { bkRc = BKException.Code.ZKException; } cb.operationComplete(bkRc, (Void)null); } }, null); } @Override public void registerLedgerMetadataListener(long ledgerId, LedgerMetadataListener listener) { if (null != listener) { LOG.info("Registered ledger metadata listener {} on ledger {}.", listener, ledgerId); Set listenerSet = listeners.get(ledgerId); if (listenerSet == null) { Set newListenerSet = new HashSet(); Set oldListenerSet = listeners.putIfAbsent(ledgerId, newListenerSet); if (null != oldListenerSet) { listenerSet = oldListenerSet; } else { listenerSet = newListenerSet; } } synchronized (listenerSet) { listenerSet.add(listener); } new ReadLedgerMetadataTask(ledgerId).run(); } } @Override public void unregisterLedgerMetadataListener(long ledgerId, LedgerMetadataListener listener) { Set listenerSet = listeners.get(ledgerId); if (listenerSet != null) { synchronized (listenerSet) { if (listenerSet.remove(listener)) { LOG.info("Unregistered ledger metadata listener {} on ledger {}.", listener, ledgerId); } if (listenerSet.isEmpty()) { listeners.remove(ledgerId, listenerSet); } } } } @Override public void readLedgerMetadata(final long ledgerId, final GenericCallback readCb) { readLedgerMetadata(ledgerId, readCb, null); } protected void readLedgerMetadata(final long ledgerId, final GenericCallback readCb, Watcher watcher) { zk.getData(getLedgerPath(ledgerId), watcher, new DataCallback() { @Override public void processResult(int rc, String path, Object ctx, byte[] data, Stat stat) { if (rc == KeeperException.Code.NONODE.intValue()) { if (LOG.isDebugEnabled()) { LOG.debug("No such ledger: " + ledgerId, KeeperException.create(KeeperException.Code.get(rc), path)); } readCb.operationComplete(BKException.Code.NoSuchLedgerExistsException, null); return; } if (rc != KeeperException.Code.OK.intValue()) { LOG.error("Could not read metadata for ledger: " + ledgerId, KeeperException.create(KeeperException.Code.get(rc), path)); readCb.operationComplete(BKException.Code.ZKException, null); return; } LedgerMetadata 
metadata; try { metadata = LedgerMetadata.parseConfig(data, new ZkVersion(stat.getVersion())); } catch (IOException e) { LOG.error("Could not parse ledger metadata for ledger: " + ledgerId, e); readCb.operationComplete(BKException.Code.ZKException, null); return; } readCb.operationComplete(BKException.Code.OK, metadata); } }, null); } @Override public void writeLedgerMetadata(final long ledgerId, final LedgerMetadata metadata, final GenericCallback cb) { Version v = metadata.getVersion(); if (Version.NEW == v || !(v instanceof ZkVersion)) { cb.operationComplete(BKException.Code.MetadataVersionException, null); return; } final ZkVersion zv = (ZkVersion) v; zk.setData(getLedgerPath(ledgerId), metadata.serialize(), zv.getZnodeVersion(), new StatCallback() { @Override public void processResult(int rc, String path, Object ctx, Stat stat) { if (KeeperException.Code.BadVersion == rc) { cb.operationComplete(BKException.Code.MetadataVersionException, null); } else if (KeeperException.Code.OK.intValue() == rc) { // update metadata version metadata.setVersion(zv.setZnodeVersion(stat.getVersion())); cb.operationComplete(BKException.Code.OK, null); } else { LOG.warn("Conditional update ledger metadata failed: ", KeeperException.Code.get(rc)); cb.operationComplete(BKException.Code.ZKException, null); } } }, null); } /** * Process ledgers in a single zk node. * *

* For each ledger found in this zk node, processor#process(ledgerId) will be triggered * to process a specific ledger. After all ledgers have been processed, the finalCb will * be called with the provided context object. The RC passed to finalCb is decided as follows: *

* <ul> * <li>If all ledgers are processed successfully, successRc will be passed.</li> * <li>If any ledger fails to be processed, failureRc will be passed.</li> * </ul>

* * @param path * Zk node path to store ledgers * @param processor * Processor provided to process ledger * @param finalCb * Callback object when all ledgers are processed * @param ctx * Context object passed to finalCb * @param successRc * RC passed to finalCb when all ledgers are processed successfully * @param failureRc * RC passed to finalCb when either ledger is processed failed */ protected void asyncProcessLedgersInSingleNode( final String path, final Processor processor, final AsyncCallback.VoidCallback finalCb, final Object ctx, final int successRc, final int failureRc) { ZkUtils.getChildrenInSingleNode(zk, path, new GenericCallback>() { @Override public void operationComplete(int rc, List ledgerNodes) { if (Code.OK.intValue() != rc) { finalCb.processResult(failureRc, null, ctx); return; } Set zkActiveLedgers = ledgerListToSet(ledgerNodes, path); LOG.debug("Processing ledgers: {}", zkActiveLedgers); // no ledgers found, return directly if (zkActiveLedgers.size() == 0) { finalCb.processResult(successRc, null, ctx); return; } MultiCallback mcb = new MultiCallback(zkActiveLedgers.size(), finalCb, ctx, successRc, failureRc); // start loop over all ledgers for (Long ledger : zkActiveLedgers) { processor.process(ledger, mcb); } } }); } /** * Whether the znode a special znode * * @param znode * Znode Name * @return true if the znode is a special znode otherwise false */ protected boolean isSpecialZnode(String znode) { if (BookKeeperConstants.AVAILABLE_NODE.equals(znode) || BookKeeperConstants.COOKIE_NODE.equals(znode) || BookKeeperConstants.LAYOUT_ZNODE.equals(znode) || BookKeeperConstants.INSTANCEID.equals(znode) || BookKeeperConstants.UNDER_REPLICATION_NODE.equals(znode)) { return true; } return false; } /** * Convert the ZK retrieved ledger nodes to a HashSet for easier comparisons. * * @param ledgerNodes * zk ledger nodes * @param path * the prefix path of the ledger nodes * @return ledger id hash set */ protected NavigableSet ledgerListToSet(List ledgerNodes, String path) { NavigableSet zkActiveLedgers = new TreeSet(); for (String ledgerNode : ledgerNodes) { if (isSpecialZnode(ledgerNode)) { continue; } try { // convert the node path to ledger id according to different ledger manager implementation zkActiveLedgers.add(getLedgerId(path + "/" + ledgerNode)); } catch (IOException e) { LOG.warn("Error extracting ledgerId from ZK ledger node: " + ledgerNode); // This is a pretty bad error as it indicates a ledger node in ZK // has an incorrect format. For now just continue and consider // this as a non-existent ledger. continue; } } return zkActiveLedgers; } @Override public void close() { try { scheduler.shutdown(); } catch (Exception e) { LOG.warn("Error when closing zookeeper based ledger manager: ", e); } } } FlatLedgerManager.java000066400000000000000000000143041244507361200345350ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/metapackage org.apache.bookkeeper.meta; /** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. 
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import java.io.IOException; import java.util.NoSuchElementException; import java.util.Set; import org.apache.bookkeeper.client.BKException; import org.apache.bookkeeper.client.LedgerMetadata; import org.apache.bookkeeper.conf.AbstractConfiguration; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.GenericCallback; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.Processor; import org.apache.bookkeeper.util.StringUtils; import org.apache.bookkeeper.util.ZkUtils; import org.apache.zookeeper.AsyncCallback; import org.apache.zookeeper.AsyncCallback.StringCallback; import org.apache.zookeeper.CreateMode; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.KeeperException.Code; import org.apache.zookeeper.ZooDefs.Ids; import org.apache.zookeeper.ZooKeeper; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Manage all ledgers in a single zk node. * *

* All ledgers' metadata are put under a single zk node, created using zk sequential nodes. * Each ledger node is prefixed with 'L'. *
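Concretely, with the default root path, the flat layout maps ledger ids to sequential znodes as sketched below (FlatLedgerManager is package-private, and the 10-digit zero-padded suffix is assumed from the sequential-counter format described for the hierarchical manager later in this file set):

FlatLedgerManager mgr = new FlatLedgerManager(conf, zk); // assumes conf and zk are in scope
String path = mgr.getLedgerPath(1L); // expected: "/ledgers/L0000000001"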

*/ class FlatLedgerManager extends AbstractZkLedgerManager { static final Logger LOG = LoggerFactory.getLogger(FlatLedgerManager.class); // path prefix to store ledger znodes private final String ledgerPrefix; /** * Constructor * * @param conf * Configuration object * @param zk * ZooKeeper Client Handle * @param ledgerRootPath * ZooKeeper Path to store ledger metadata * @throws IOException when version is not compatible */ public FlatLedgerManager(AbstractConfiguration conf, ZooKeeper zk) { super(conf, zk); ledgerPrefix = ledgerRootPath + "/" + StringUtils.LEDGER_NODE_PREFIX; } @Override public void createLedger(final LedgerMetadata metadata, final GenericCallback cb) { StringCallback scb = new StringCallback() { @Override public void processResult(int rc, String path, Object ctx, String name) { if (Code.OK.intValue() != rc) { LOG.error("Could not create node for ledger", KeeperException.create(KeeperException.Code.get(rc), path)); cb.operationComplete(BKException.Code.ZKException, null); } else { // update znode status metadata.setVersion(new ZkVersion(0)); try { long ledgerId = getLedgerId(name); cb.operationComplete(BKException.Code.OK, ledgerId); } catch (IOException ie) { LOG.error("Could not extract ledger-id from path:" + name, ie); cb.operationComplete(BKException.Code.ZKException, null); } } } }; ZkUtils.asyncCreateFullPathOptimistic(zk, ledgerPrefix, metadata.serialize(), Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT_SEQUENTIAL, scb, null); } @Override public String getLedgerPath(long ledgerId) { StringBuilder sb = new StringBuilder(); sb.append(ledgerPrefix) .append(StringUtils.getZKStringId(ledgerId)); return sb.toString(); } @Override public long getLedgerId(String nodeName) throws IOException { long ledgerId; try { String parts[] = nodeName.split(ledgerPrefix); ledgerId = Long.parseLong(parts[parts.length - 1]); } catch (NumberFormatException e) { throw new IOException(e); } return ledgerId; } @Override public void asyncProcessLedgers(final Processor processor, final AsyncCallback.VoidCallback finalCb, final Object ctx, final int successRc, final int failureRc) { asyncProcessLedgersInSingleNode(ledgerRootPath, processor, finalCb, ctx, successRc, failureRc); } @Override public LedgerRangeIterator getLedgerRanges() { return new LedgerRangeIterator() { // single iterator, can visit only one time boolean nextCalled = false; LedgerRange nextRange = null; synchronized private void preload() throws IOException { if (nextRange != null) { return; } Set zkActiveLedgers = null; try { zkActiveLedgers = ledgerListToSet( ZkUtils.getChildrenInSingleNode(zk, ledgerRootPath), ledgerRootPath); nextRange = new LedgerRange(zkActiveLedgers); } catch (InterruptedException ie) { Thread.currentThread().interrupt(); throw new IOException("Error when get child nodes from zk", ie); } } @Override synchronized public boolean hasNext() throws IOException { preload(); return nextRange != null && nextRange.size() > 0 && !nextCalled; } @Override synchronized public LedgerRange next() throws IOException { if (!hasNext()) { throw new NoSuchElementException(); } nextCalled = true; return nextRange; } }; } } FlatLedgerManagerFactory.java000066400000000000000000000062111244507361200360630ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/metapackage org.apache.bookkeeper.meta; /** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. 
See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import java.io.IOException; import java.util.List; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.ZKUtil; import org.apache.bookkeeper.replication.ReplicationException; import org.apache.bookkeeper.conf.AbstractConfiguration; import org.apache.zookeeper.ZooKeeper; /** * Flat Ledger Manager Factory */ public class FlatLedgerManagerFactory extends LedgerManagerFactory { public static final String NAME = "flat"; public static final int CUR_VERSION = 1; AbstractConfiguration conf; ZooKeeper zk; @Override public int getCurrentVersion() { return CUR_VERSION; } @Override public LedgerManagerFactory initialize(final AbstractConfiguration conf, final ZooKeeper zk, final int factoryVersion) throws IOException { if (CUR_VERSION != factoryVersion) { throw new IOException("Incompatible layout version found : " + factoryVersion); } this.conf = conf; this.zk = zk; return this; } @Override public void uninitialize() throws IOException { // since zookeeper instance is passed from outside // we don't need to close it here } @Override public LedgerManager newLedgerManager() { return new FlatLedgerManager(conf, zk); } @Override public LedgerUnderreplicationManager newLedgerUnderreplicationManager() throws KeeperException, InterruptedException, ReplicationException.CompatibilityException { return new ZkLedgerUnderreplicationManager(conf, zk); } @Override public void format(AbstractConfiguration conf, ZooKeeper zk) throws InterruptedException, KeeperException, IOException { FlatLedgerManager ledgerManager = (FlatLedgerManager) newLedgerManager(); String ledgersRootPath = conf.getZkLedgersRootPath(); List children = zk.getChildren(ledgersRootPath, false); for (String child : children) { if (ledgerManager.isSpecialZnode(child)) { continue; } ZKUtil.deleteRecursive(zk, ledgersRootPath + "/" + child); } // Delete and recreate the LAYOUT information. super.format(conf, zk); } } HierarchicalLedgerManager.java000066400000000000000000000500021244507361200362200ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/meta/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
* See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.bookkeeper.meta; import java.io.IOException; import java.util.Collections; import java.util.Iterator; import java.util.List; import java.util.NavigableSet; import java.util.NoSuchElementException; import java.util.concurrent.Executors; import java.util.concurrent.ScheduledExecutorService; import java.util.concurrent.atomic.AtomicInteger; import org.apache.bookkeeper.client.BKException; import org.apache.bookkeeper.client.LedgerMetadata; import org.apache.bookkeeper.conf.AbstractConfiguration; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.GenericCallback; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.Processor; import org.apache.bookkeeper.util.StringUtils; import org.apache.bookkeeper.util.ZkUtils; import org.apache.zookeeper.AsyncCallback; import org.apache.zookeeper.AsyncCallback.StringCallback; import org.apache.zookeeper.CreateMode; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.KeeperException.Code; import org.apache.zookeeper.ZooDefs.Ids; import org.apache.zookeeper.ZooKeeper; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Hierarchical Ledger Manager which manages ledger meta in zookeeper using 2-level hierarchical znodes. * *

* Hierarchical Ledger Manager first obtains a globally unique id from zookeeper using an EPHEMERAL_SEQUENTIAL * znode (ledgersRootPath)/ledgers/idgen/ID-. * Since the zookeeper sequential counter has a format of %10d -- that is 10 digits with 0 (zero) padding, i.e. * "<path>0000000001", HierarchicalLedgerManager splits the generated id into 3 parts (2-4-4): *

<level1 (2 digits)><level2 (4 digits)><level3 (4 digits)>
* These 3 parts are used to form the actual ledger node path used to store ledger metadata: *
(ledgersRootPath)/level1/level2/L(level3)
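 * As an illustrative sketch (assumed helper code, not taken from this class), the
 * zero-padded id string maps to a path as:
 * <pre>
 * String idStr = String.format("%010d", ledgerId);   // e.g. "0000000001"
 * String path = ledgersRootPath
 *     + "/" + idStr.substring(0, 2)                  // level1 : "00"
 *     + "/" + idStr.substring(2, 6)                  // level2 : "0000"
 *     + "/L" + idStr.substring(6);                   // level3 : "0001"
 * </pre>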
* E.g Ledger 0000000001 is split into 3 parts 00, 0000, 0001, which is stored in * (ledgersRootPath)/00/0000/L0001. So each znode could have at most 10000 ledgers, which avoids * errors during garbage collection due to lists of children that are too long. */ class HierarchicalLedgerManager extends AbstractZkLedgerManager { static final Logger LOG = LoggerFactory.getLogger(HierarchicalLedgerManager.class); static final String IDGEN_ZNODE = "idgen"; static final String IDGENERATION_PREFIX = "/" + IDGEN_ZNODE + "/ID-"; private static final String MAX_ID_SUFFIX = "9999"; private static final String MIN_ID_SUFFIX = "0000"; // Path to generate global id private final String idGenPath; /** * Constructor * * @param conf * Configuration object * @param zk * ZooKeeper Client Handle */ public HierarchicalLedgerManager(AbstractConfiguration conf, ZooKeeper zk) { super(conf, zk); this.idGenPath = ledgerRootPath + IDGENERATION_PREFIX; LOG.debug("Using HierarchicalLedgerManager with root path : {}", ledgerRootPath); } @Override public void createLedger(final LedgerMetadata metadata, final GenericCallback ledgerCb) { ZkUtils.asyncCreateFullPathOptimistic(zk, idGenPath, new byte[0], Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL, new StringCallback() { @Override public void processResult(int rc, String path, Object ctx, final String idPathName) { if (rc != KeeperException.Code.OK.intValue()) { LOG.error("Could not generate new ledger id", KeeperException.create(KeeperException.Code.get(rc), path)); ledgerCb.operationComplete(BKException.Code.ZKException, null); return; } /* * Extract ledger id from gen path */ long ledgerId; try { ledgerId = getLedgerIdFromGenPath(idPathName); } catch (IOException e) { LOG.error("Could not extract ledger-id from id gen path:" + path, e); ledgerCb.operationComplete(BKException.Code.ZKException, null); return; } String ledgerPath = getLedgerPath(ledgerId); final long lid = ledgerId; StringCallback scb = new StringCallback() { @Override public void processResult(int rc, String path, Object ctx, String name) { if (rc != KeeperException.Code.OK.intValue()) { LOG.error("Could not create node for ledger", KeeperException.create(KeeperException.Code.get(rc), path)); ledgerCb.operationComplete(BKException.Code.ZKException, null); } else { // update version metadata.setVersion(new ZkVersion(0)); ledgerCb.operationComplete(BKException.Code.OK, lid); } } }; ZkUtils.asyncCreateFullPathOptimistic(zk, ledgerPath, metadata.serialize(), Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT, scb, null); // delete the znode for id generation scheduler.submit(new Runnable() { @Override public void run() { zk.delete(idPathName, -1, new AsyncCallback.VoidCallback() { @Override public void processResult(int rc, String path, Object ctx) { if (rc != KeeperException.Code.OK.intValue()) { LOG.warn("Exception during deleting znode for id generation : ", KeeperException.create(KeeperException.Code.get(rc), path)); } else { LOG.debug("Deleting znode for id generation : {}", idPathName); } } }, null); } }); } }, null); } // get ledger id from generation path private long getLedgerIdFromGenPath(String nodeName) throws IOException { long ledgerId; try { String parts[] = nodeName.split(IDGENERATION_PREFIX); ledgerId = Long.parseLong(parts[parts.length - 1]); } catch (NumberFormatException e) { throw new IOException(e); } return ledgerId; } @Override public String getLedgerPath(long ledgerId) { return ledgerRootPath + StringUtils.getHierarchicalLedgerPath(ledgerId); } @Override public long getLedgerId(String 
pathName) throws IOException { if (!pathName.startsWith(ledgerRootPath)) { throw new IOException("it is not a valid hashed path name : " + pathName); } String hierarchicalPath = pathName.substring(ledgerRootPath.length() + 1); return StringUtils.stringToHierarchicalLedgerId(hierarchicalPath); } // get ledger from all level nodes private long getLedgerId(String...levelNodes) throws IOException { return StringUtils.stringToHierarchicalLedgerId(levelNodes); } // // Active Ledger Manager // /** * Get the smallest cache id in a specified node /level1/level2 * * @param level1 * 1st level node name * @param level2 * 2nd level node name * @return the smallest ledger id */ private long getStartLedgerIdByLevel(String level1, String level2) throws IOException { return getLedgerId(level1, level2, MIN_ID_SUFFIX); } /** * Get the largest cache id in a specified node /level1/level2 * * @param level1 * 1st level node name * @param level2 * 2nd level node name * @return the largest ledger id */ private long getEndLedgerIdByLevel(String level1, String level2) throws IOException { return getLedgerId(level1, level2, MAX_ID_SUFFIX); } @Override public void asyncProcessLedgers(final Processor processor, final AsyncCallback.VoidCallback finalCb, final Object context, final int successRc, final int failureRc) { // process 1st level nodes asyncProcessLevelNodes(ledgerRootPath, new Processor() { @Override public void process(final String l1Node, final AsyncCallback.VoidCallback cb1) { if (isSpecialZnode(l1Node)) { cb1.processResult(successRc, null, context); return; } final String l1NodePath = ledgerRootPath + "/" + l1Node; // process level1 path, after all children of level1 process // it callback to continue processing next level1 node asyncProcessLevelNodes(l1NodePath, new Processor() { @Override public void process(String l2Node, AsyncCallback.VoidCallback cb2) { // process level1/level2 path String l2NodePath = ledgerRootPath + "/" + l1Node + "/" + l2Node; // process each ledger // after all ledger are processed, cb2 will be call to continue processing next level2 node asyncProcessLedgersInSingleNode(l2NodePath, processor, cb2, context, successRc, failureRc); } }, cb1, context, successRc, failureRc); } }, finalCb, context, successRc, failureRc); } /** * Process hash nodes in a given path */ private void asyncProcessLevelNodes( final String path, final Processor processor, final AsyncCallback.VoidCallback finalCb, final Object context, final int successRc, final int failureRc) { zk.sync(path, new AsyncCallback.VoidCallback() { @Override public void processResult(int rc, String path, Object ctx) { if (rc != Code.OK.intValue()) { LOG.error("Error syncing path " + path + " when getting its chidren: ", KeeperException.create(KeeperException.Code.get(rc), path)); finalCb.processResult(failureRc, null, context); return; } zk.getChildren(path, false, new AsyncCallback.ChildrenCallback() { @Override public void processResult(int rc, String path, Object ctx, List levelNodes) { if (rc != Code.OK.intValue()) { LOG.error("Error polling hash nodes of " + path, KeeperException.create(KeeperException.Code.get(rc), path)); finalCb.processResult(failureRc, null, context); return; } AsyncListProcessor listProcessor = new AsyncListProcessor(scheduler); // process its children listProcessor.process(levelNodes, processor, finalCb, context, successRc, failureRc); } }, null); } }, null); } /** * Process list one by one in asynchronize way. Process will be stopped immediately * when error occurred. 
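 * <p>
 * Illustrative usage, mirroring how this file drives it (all names are the ones
 * already in scope at the call site):
 * <pre>
 * AsyncListProcessor<String> listProcessor = new AsyncListProcessor<String>(scheduler);
 * listProcessor.process(levelNodes, processor, finalCb, context, successRc, failureRc);
 * </pre>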
*/ private static class AsyncListProcessor { // use this to prevent long stack chains from building up in callbacks ScheduledExecutorService scheduler; /** * Constructor * * @param scheduler * Executor used to prevent long stack chains */ public AsyncListProcessor(ScheduledExecutorService scheduler) { this.scheduler = scheduler; } /** * Process list of items * * @param data * List of data to process * @param processor * Callback to process element of list when success * @param finalCb * Final callback to be called after all elements in the list are processed * @param contxt * Context of final callback * @param successRc * RC passed to final callback on success * @param failureRc * RC passed to final callback on failure */ public void process(final List data, final Processor processor, final AsyncCallback.VoidCallback finalCb, final Object context, final int successRc, final int failureRc) { if (data == null || data.size() == 0) { finalCb.processResult(successRc, null, context); return; } final int size = data.size(); final AtomicInteger current = new AtomicInteger(0); AsyncCallback.VoidCallback stubCallback = new AsyncCallback.VoidCallback() { @Override public void processResult(int rc, String path, Object ctx) { if (rc != successRc) { // terminal immediately finalCb.processResult(failureRc, null, context); return; } // process next element int next = current.incrementAndGet(); if (next >= size) { // reach the end of list finalCb.processResult(successRc, null, context); return; } final T dataToProcess = data.get(next); final AsyncCallback.VoidCallback stub = this; scheduler.submit(new Runnable() { @Override public final void run() { processor.process(dataToProcess, stub); } }); } }; T firstElement = data.get(0); processor.process(firstElement, stubCallback); } } @Override protected boolean isSpecialZnode(String znode) { return IDGEN_ZNODE.equals(znode) || super.isSpecialZnode(znode); } @Override public LedgerRangeIterator getLedgerRanges() { return new HierarchicalLedgerRangeIterator(); } /** * Iterator through each metadata bucket with hierarchical mode */ private class HierarchicalLedgerRangeIterator implements LedgerRangeIterator { private Iterator l1NodesIter = null; private Iterator l2NodesIter = null; private String curL1Nodes = ""; private boolean iteratorDone = false; private LedgerRange nextRange = null; /** * iterate next level1 znode * * @return false if have visited all level1 nodes * @throws InterruptedException/KeeperException if error occurs reading zookeeper children */ private boolean nextL1Node() throws KeeperException, InterruptedException { l2NodesIter = null; while (l2NodesIter == null) { if (l1NodesIter.hasNext()) { curL1Nodes = l1NodesIter.next(); } else { return false; } if (isSpecialZnode(curL1Nodes)) { continue; } List l2Nodes = zk.getChildren(ledgerRootPath + "/" + curL1Nodes, null); Collections.sort(l2Nodes); l2NodesIter = l2Nodes.iterator(); if (!l2NodesIter.hasNext()) { l2NodesIter = null; continue; } } return true; } synchronized private void preload() throws IOException { while (nextRange == null && !iteratorDone) { boolean hasMoreElements = false; try { if (l1NodesIter == null) { l1NodesIter = zk.getChildren(ledgerRootPath, null).iterator(); hasMoreElements = nextL1Node(); } else if (l2NodesIter == null || !l2NodesIter.hasNext()) { hasMoreElements = nextL1Node(); } else { hasMoreElements = true; } } catch (KeeperException ke) { throw new IOException("Error preloading next range", ke); } catch (InterruptedException ie) { Thread.currentThread().interrupt(); 
throw new IOException("Interrupted while preloading", ie); } if (hasMoreElements) { nextRange = getLedgerRangeByLevel(curL1Nodes, l2NodesIter.next()); if (nextRange.size() == 0) { nextRange = null; } } else { iteratorDone = true; } } } @Override synchronized public boolean hasNext() throws IOException { preload(); return nextRange != null && !iteratorDone; } @Override synchronized public LedgerRange next() throws IOException { if (!hasNext()) { throw new NoSuchElementException(); } LedgerRange r = nextRange; nextRange = null; return r; } /** * Get a single node level1/level2 * * @param level1 * 1st level node name * @param level2 * 2nd level node name * @throws IOException */ LedgerRange getLedgerRangeByLevel(final String level1, final String level2) throws IOException { StringBuilder nodeBuilder = new StringBuilder(); nodeBuilder.append(ledgerRootPath).append("/") .append(level1).append("/").append(level2); String nodePath = nodeBuilder.toString(); List ledgerNodes = null; try { ledgerNodes = ZkUtils.getChildrenInSingleNode(zk, nodePath); } catch (InterruptedException e) { throw new IOException("Error when get child nodes from zk", e); } NavigableSet zkActiveLedgers = ledgerListToSet(ledgerNodes, nodePath); if (LOG.isDebugEnabled()) { LOG.debug("All active ledgers from ZK for hash node " + level1 + "/" + level2 + " : " + zkActiveLedgers); } return new LedgerRange(zkActiveLedgers.subSet(getStartLedgerIdByLevel(level1, level2), true, getEndLedgerIdByLevel(level1, level2), true)); } } } HierarchicalLedgerManagerFactory.java000066400000000000000000000064041244507361200375570ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/metapackage org.apache.bookkeeper.meta; /** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import java.io.IOException; import java.util.List; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.ZKUtil; import org.apache.bookkeeper.replication.ReplicationException; import org.apache.bookkeeper.conf.AbstractConfiguration; import org.apache.zookeeper.ZooKeeper; /** * Hierarchical Ledger Manager Factory */ public class HierarchicalLedgerManagerFactory extends LedgerManagerFactory { public static final String NAME = "hierarchical"; public static final int CUR_VERSION = 1; AbstractConfiguration conf; ZooKeeper zk; @Override public int getCurrentVersion() { return CUR_VERSION; } @Override public LedgerManagerFactory initialize(final AbstractConfiguration conf, final ZooKeeper zk, final int factoryVersion) throws IOException { if (CUR_VERSION != factoryVersion) { throw new IOException("Incompatible layout version found : " + factoryVersion); } this.conf = conf; this.zk = zk; return this; } @Override public void uninitialize() throws IOException { // since zookeeper instance is passed from outside // we don't need to close it here } @Override public LedgerManager newLedgerManager() { return new HierarchicalLedgerManager(conf, zk); } @Override public LedgerUnderreplicationManager newLedgerUnderreplicationManager() throws KeeperException, InterruptedException, ReplicationException.CompatibilityException{ return new ZkLedgerUnderreplicationManager(conf, zk); } @Override public void format(AbstractConfiguration conf, ZooKeeper zk) throws InterruptedException, KeeperException, IOException { HierarchicalLedgerManager ledgerManager = (HierarchicalLedgerManager) newLedgerManager(); String ledgersRootPath = conf.getZkLedgersRootPath(); List children = zk.getChildren(ledgersRootPath, false); for (String child : children) { if (!HierarchicalLedgerManager.IDGEN_ZNODE.equals(child) && ledgerManager.isSpecialZnode(child)) { continue; } ZKUtil.deleteRecursive(zk, ledgersRootPath + "/" + child); } // Delete and recreate the LAYOUT information. super.format(conf, zk); } } LedgerLayout.java000066400000000000000000000175141244507361200336370ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/metapackage org.apache.bookkeeper.meta; /** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import java.io.IOException; import org.apache.bookkeeper.util.BookKeeperConstants; import org.apache.zookeeper.CreateMode; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.ZooDefs.Ids; import org.apache.zookeeper.ZooKeeper; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * This class encapsulates ledger layout information that is persistently stored * in zookeeper. It provides parsing and serialization methods of such information. 
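 * <p>
 * With the current format version, the serialized layout stored in the LAYOUT znode
 * is two lines -- the format version, then the factory class and manager version
 * joined by a colon -- for example:
 * <pre>
 * 2
 * org.apache.bookkeeper.meta.HierarchicalLedgerManagerFactory:1
 * </pre>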
* */ class LedgerLayout { static final Logger LOG = LoggerFactory.getLogger(LedgerLayout.class); // version of compability layout version public static final int LAYOUT_MIN_COMPAT_VERSION = 1; // version of ledger layout metadata public static final int LAYOUT_FORMAT_VERSION = 2; /** * Read ledger layout from zookeeper * * @param zk ZooKeeper Client * @param ledgersRoot Root of the ledger namespace to check * @return ledger layout, or null if none set in zookeeper */ public static LedgerLayout readLayout(final ZooKeeper zk, final String ledgersRoot) throws IOException, KeeperException { String ledgersLayout = ledgersRoot + "/" + BookKeeperConstants.LAYOUT_ZNODE; try { LedgerLayout layout; try { byte[] layoutData = zk.getData(ledgersLayout, false, null); layout = parseLayout(layoutData); } catch (KeeperException.NoNodeException nne) { return null; } return layout; } catch (InterruptedException ie) { throw new IOException(ie); } } static final String splitter = ":"; static final String lSplitter = "\n"; // ledger manager factory class private String managerFactoryCls; // ledger manager version private int managerVersion; // layout version of how to store layout information private int layoutFormatVersion = LAYOUT_FORMAT_VERSION; /** * Ledger Layout Constructor * * @param managerFactoryCls * Ledger Manager Factory Class * @param managerVersion * Ledger Manager Version * @param layoutFormatVersion * Ledger Layout Format Version */ public LedgerLayout(String managerFactoryCls, int managerVersion) { this(managerFactoryCls, managerVersion, LAYOUT_FORMAT_VERSION); } LedgerLayout(String managerFactoryCls, int managerVersion, int layoutVersion) { this.managerFactoryCls = managerFactoryCls; this.managerVersion = managerVersion; this.layoutFormatVersion = layoutVersion; } /** * Get Ledger Manager Type * * @return ledger manager type * @deprecated replaced by {@link #getManagerFactoryClass()} */ @Deprecated public String getManagerType() { // pre V2 layout store as manager type return this.managerFactoryCls; } /** * Get ledger manager factory class * * @return ledger manager factory class */ public String getManagerFactoryClass() { return this.managerFactoryCls; } public int getManagerVersion() { return this.managerVersion; } /** * Return layout format version * * @return layout format version */ public int getLayoutFormatVersion() { return this.layoutFormatVersion; } /** * Store the ledger layout into zookeeper */ public void store(final ZooKeeper zk, String ledgersRoot) throws IOException, KeeperException, InterruptedException { String ledgersLayout = ledgersRoot + "/" + BookKeeperConstants.LAYOUT_ZNODE; zk.create(ledgersLayout, serialize(), Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT); } /** * Delete the LAYOUT from zookeeper */ public void delete(final ZooKeeper zk, String ledgersRoot) throws KeeperException, InterruptedException { String ledgersLayout = ledgersRoot + "/" + BookKeeperConstants.LAYOUT_ZNODE; zk.delete(ledgersLayout, -1); } /** * Generates a byte array based on the LedgerLayout object. 
* * @return byte[] */ private byte[] serialize() throws IOException { String s = new StringBuilder().append(layoutFormatVersion).append(lSplitter) .append(managerFactoryCls).append(splitter).append(managerVersion).toString(); LOG.debug("Serialized layout info: {}", s); return s.getBytes("UTF-8"); } /** * Parses a given byte array and transforms into a LedgerLayout object * * @param bytes * byte array to parse * @param znodeVersion * version of znode * @return LedgerLayout * @throws IOException * if the given byte[] cannot be parsed */ private static LedgerLayout parseLayout(byte[] bytes) throws IOException { String layout = new String(bytes, "UTF-8"); LOG.debug("Parsing Layout: {}", layout); String lines[] = layout.split(lSplitter); try { int layoutFormatVersion = new Integer(lines[0]); if (LAYOUT_FORMAT_VERSION < layoutFormatVersion || LAYOUT_MIN_COMPAT_VERSION > layoutFormatVersion) { throw new IOException("Metadata version not compatible. Expected " + LAYOUT_FORMAT_VERSION + ", but got " + layoutFormatVersion); } if (lines.length < 2) { throw new IOException("Ledger manager and its version absent from layout: " + layout); } String[] parts = lines[1].split(splitter); if (parts.length != 2) { throw new IOException("Invalid Ledger Manager defined in layout : " + layout); } // ledger manager factory class String managerFactoryCls = parts[0]; // ledger manager version int managerVersion = new Integer(parts[1]); return new LedgerLayout(managerFactoryCls, managerVersion, layoutFormatVersion); } catch (NumberFormatException e) { throw new IOException("Could not parse layout '" + layout + "'", e); } } @Override public boolean equals(Object obj) { if (null == obj) { return false; } if (!(obj instanceof LedgerLayout)) { return false; } LedgerLayout other = (LedgerLayout)obj; return managerFactoryCls.equals(other.managerFactoryCls) && managerVersion == other.managerVersion; } @Override public int hashCode() { return (managerFactoryCls + managerVersion).hashCode(); } @Override public String toString() { StringBuilder sb = new StringBuilder(); sb.append("LV").append(layoutFormatVersion).append(":") .append(",Type:").append(managerFactoryCls).append(":") .append(managerVersion); return sb.toString(); } } LedgerManager.java000066400000000000000000000163251244507361200337330ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/metapackage org.apache.bookkeeper.meta; /** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import java.io.Closeable; import java.io.IOException; import java.util.Set; import java.util.SortedSet; import java.util.TreeSet; import org.apache.zookeeper.AsyncCallback; import org.apache.bookkeeper.client.BKException; import org.apache.bookkeeper.client.LedgerMetadata; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.LedgerMetadataListener; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.GenericCallback; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.Processor; import org.apache.bookkeeper.versioning.Version; /** * LedgerManager takes responsibility of ledger management in client side. * *
    *
  • How to store ledger metadata (e.g. in ZooKeeper or another key/value store) *
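 * <p>
 * A minimal sketch of the callback-driven usage (assuming a manager obtained from a
 * LedgerManagerFactory and a prepared LedgerMetadata instance):
 * <pre>
 * manager.createLedger(metadata, new GenericCallback<Long>() {
 *     public void operationComplete(int rc, Long ledgerId) {
 *         if (BKException.Code.OK == rc) {
 *             // ledger metadata stored; ledgerId identifies the new ledger
 *         }
 *     }
 * });
 * </pre>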
*/ public interface LedgerManager extends Closeable { /** * Create a new ledger with provided metadata * * @param metadata * Metadata provided when creating a new ledger * @param cb * Callback when creating a new ledger. * {@link BKException.Code.ZKException} return code when can't generate * or extract new ledger id */ public void createLedger(LedgerMetadata metadata, GenericCallback cb); /** * Remove a specified ledger metadata by ledgerId and version. * * @param ledgerId * Ledger Id * @param version * Ledger metadata version * @param cb * Callback when removed ledger metadata. * {@link BKException.Code.MetadataVersionException} return code when version doesn't match, * {@link BKException.Code.NoSuchLedgerExistsException} return code when ledger doesn't exist, * {@link BKException.Code.ZKException} return code when other issues happen. */ public void removeLedgerMetadata(long ledgerId, Version version, GenericCallback vb); /** * Read ledger metadata of a specified ledger. * * @param ledgerId * Ledger Id * @param readCb * Callback when read ledger metadata. * {@link BKException.Code.NoSuchLedgerExistsException} return code when ledger doesn't exist, * {@link BKException.Code.ZKException} return code when other issues happen. */ public void readLedgerMetadata(long ledgerId, GenericCallback readCb); /** * Write ledger metadata. * * @param ledgerId * Ledger Id * @param metadata * Ledger Metadata to write * @param cb * Callback when finished writing ledger metadata. * {@link BKException.Code.MetadataVersionException} return code when version doesn't match, * {@link BKException.Code.ZKException} return code when other issues happen. */ public void writeLedgerMetadata(long ledgerId, LedgerMetadata metadata, GenericCallback cb); /** * Register the ledger metadata listener on ledgerId. * * @param ledgerId * ledger id. * @param listener * listener. */ public abstract void registerLedgerMetadataListener(long ledgerId, LedgerMetadataListener listener); /** * Unregister the ledger metadata listener on ledgerId. * * @param ledgerId * ledger id. * @param listener * ledger metadata listener. */ public abstract void unregisterLedgerMetadataListener(long ledgerId, LedgerMetadataListener listener); /** * Loop to process all ledgers. *

*

    * After all ledgers have been processed, finalCb will be triggered: *
  • if all ledgers are processed successfully, the success rc will be passed to finalCb. *
  • if the processing of any ledger fails, the failure rc will be passed to finalCb. *
*
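 * <p>
 * For example, a caller could count all ledgers with a sketch like the following
 * (finalCb and an AtomicInteger count are assumed to exist in the caller's scope):
 * <pre>
 * manager.asyncProcessLedgers(new Processor<Long>() {
 *     public void process(Long ledgerId, AsyncCallback.VoidCallback cb) {
 *         count.incrementAndGet();
 *         cb.processResult(BKException.Code.OK, null, null);
 *     }
 * }, finalCb, null, BKException.Code.OK, BKException.Code.ZKException);
 * </pre>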

* * @param processor * Ledger Processor to process a specific ledger * @param finalCb * Callback triggered after all ledgers are processed * @param context * Context of final callback * @param successRc * Success RC code passed to finalCb when callback * @param failureRc * Failure RC code passed to finalCb when exceptions occured. */ public void asyncProcessLedgers(Processor processor, AsyncCallback.VoidCallback finalCb, Object context, int successRc, int failureRc); /** * Loop to scan a range of metadata from metadata storage * * @return will return a iterator of the Ranges */ public LedgerRangeIterator getLedgerRanges(); /* * Used to represent the Ledgers range returned from the * current scan. */ public static class LedgerRange { // returned ledgers private final SortedSet ledgers; public LedgerRange(Set ledgers) { this.ledgers = new TreeSet(ledgers); } public int size() { return this.ledgers.size(); } public Long start() { return ledgers.first(); } public Long end() { return ledgers.last(); } public Set getLedgers() { return this.ledgers; } } /** * Interface of the ledger meta range iterator from * storage (e.g. in ZooKeeper or other key/value store) */ interface LedgerRangeIterator { /** * @return true if there are records in the ledger metadata store. false * only when there are indeed no records in ledger metadata store. * @throws IOException thrown when there is any problem accessing the ledger * metadata store. It is critical that it doesn't return false in the case * in the case it fails to access the ledger metadata store. Otherwise it * will end up deleting all ledgers by accident. */ public boolean hasNext() throws IOException; /** * Get the next element. * * @return the next element. * @throws IOException thrown when there is a problem accessing the ledger * metadata store. It is critical that it doesn't return false in the case * in the case it fails to access the ledger metadata store. Otherwise it * will end up deleting all ledgers by accident. */ public LedgerRange next() throws IOException; } } LedgerManagerFactory.java000066400000000000000000000236611244507361200352640ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/metapackage org.apache.bookkeeper.meta; /** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import java.io.IOException; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.apache.bookkeeper.replication.ReplicationException; import org.apache.bookkeeper.conf.AbstractConfiguration; import org.apache.bookkeeper.util.ReflectionUtils; import org.apache.commons.configuration.ConfigurationException; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.ZooKeeper; public abstract class LedgerManagerFactory { static final Logger LOG = LoggerFactory.getLogger(LedgerManagerFactory.class); // v1 layout static final int V1 = 1; /** * Return current factory version. * * @return current version used by factory. */ public abstract int getCurrentVersion(); /** * Initialize a factory. * * @param conf * Configuration object used to initialize factory * @param zk * Available zookeeper handle for ledger manager to use. * @param factoryVersion * What version used to initialize factory. * @return ledger manager factory instance * @throws IOException when fail to initialize the factory. */ public abstract LedgerManagerFactory initialize(final AbstractConfiguration conf, final ZooKeeper zk, final int factoryVersion) throws IOException; /** * Uninitialize the factory. * * @throws IOException when fail to uninitialize the factory. */ public abstract void uninitialize() throws IOException; /** * return ledger manager for client-side to manage ledger metadata. * * @return ledger manager * @see LedgerManager */ public abstract LedgerManager newLedgerManager(); /** * Return a ledger underreplication manager, which is used to * mark ledgers as unreplicated, and to retrieve a ledger which * is underreplicated so that it can be rereplicated. * * @return ledger underreplication manager * @see LedgerUnderreplicationManager */ public abstract LedgerUnderreplicationManager newLedgerUnderreplicationManager() throws KeeperException, InterruptedException, ReplicationException.CompatibilityException; /** * Create new Ledger Manager Factory. * * @param conf * Configuration Object. * @param zk * ZooKeeper Client Handle, talk to zk to know which ledger manager is used. * @return new ledger manager factory * @throws IOException */ public static LedgerManagerFactory newLedgerManagerFactory( final AbstractConfiguration conf, final ZooKeeper zk) throws IOException, KeeperException, InterruptedException { Class factoryClass; try { factoryClass = conf.getLedgerManagerFactoryClass(); } catch (Exception e) { throw new IOException("Failed to get ledger manager factory class from configuration : ", e); } String ledgerRootPath = conf.getZkLedgersRootPath(); if (null == ledgerRootPath || ledgerRootPath.length() == 0) { throw new IOException("Empty Ledger Root Path."); } // if zk is null, return the default ledger manager if (zk == null) { return new FlatLedgerManagerFactory() .initialize(conf, null, FlatLedgerManagerFactory.CUR_VERSION); } LedgerManagerFactory lmFactory; // check that the configured ledger manager is // compatible with the existing layout LedgerLayout layout = LedgerLayout.readLayout(zk, ledgerRootPath); if (layout == null) { // no existing layout lmFactory = createNewLMFactory(conf, zk, factoryClass); return lmFactory .initialize(conf, zk, lmFactory.getCurrentVersion()); } LOG.debug("read ledger layout {}", layout); // there is existing layout, we need to look into the layout. 
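        // Summary of the resolution below: a pre-V2 layout is validated against the
        // configured ledger manager *type* (flat or hierarchical), while a V2 layout
        // is validated against the configured factory *class*; when no class is
        // configured, the class recorded in the layout is loaded reflectively.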
// handle pre V2 layout if (layout.getLayoutFormatVersion() <= V1) { // pre V2 layout we use type of ledger manager String lmType = conf.getLedgerManagerType(); if (lmType != null && !layout.getManagerType().equals(lmType)) { throw new IOException("Configured layout " + lmType + " does not match existing layout " + layout.getManagerType()); } // create the ledger manager if (FlatLedgerManagerFactory.NAME.equals(layout.getManagerType())) { lmFactory = new FlatLedgerManagerFactory(); } else if (HierarchicalLedgerManagerFactory.NAME.equals(layout.getManagerType())) { lmFactory = new HierarchicalLedgerManagerFactory(); } else { throw new IOException("Unknown ledger manager type: " + lmType); } return lmFactory.initialize(conf, zk, layout.getManagerVersion()); } // handle V2 layout case if (factoryClass != null && !layout.getManagerFactoryClass().equals(factoryClass.getName())) { throw new IOException("Configured layout " + factoryClass.getName() + " does not match existing layout " + layout.getManagerFactoryClass()); } if (factoryClass == null) { // no factory specified in configuration try { Class theCls = Class.forName(layout.getManagerFactoryClass()); if (!LedgerManagerFactory.class.isAssignableFrom(theCls)) { throw new IOException("Wrong ledger manager factory " + layout.getManagerFactoryClass()); } factoryClass = theCls.asSubclass(LedgerManagerFactory.class); } catch (ClassNotFoundException cnfe) { throw new IOException("Failed to instantiate ledger manager factory " + layout.getManagerFactoryClass()); } } // instantiate a factory lmFactory = ReflectionUtils.newInstance(factoryClass); return lmFactory.initialize(conf, zk, layout.getManagerVersion()); } /** * Creates the new layout and stores in zookeeper and returns the * LedgerManagerFactory instance. 
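 * <p>
 * Ignoring the handling of a concurrent layout creation, the method below condenses
 * to roughly:
 * <pre>
 * LedgerManagerFactory f = ReflectionUtils.newInstance(factoryClass);
 * new LedgerLayout(factoryClass.getName(), f.getCurrentVersion())
 *         .store(zk, conf.getZkLedgersRootPath());
 * </pre>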
*/ private static LedgerManagerFactory createNewLMFactory( final AbstractConfiguration conf, final ZooKeeper zk, Class factoryClass) throws IOException, KeeperException, InterruptedException { String ledgerRootPath = conf.getZkLedgersRootPath(); LedgerManagerFactory lmFactory; LedgerLayout layout; // use default ledger manager factory if no one provided if (factoryClass == null) { // for backward compatibility, check manager type String lmType = conf.getLedgerManagerType(); if (lmType == null) { factoryClass = FlatLedgerManagerFactory.class; } else { if (FlatLedgerManagerFactory.NAME.equals(lmType)) { factoryClass = FlatLedgerManagerFactory.class; } else if (HierarchicalLedgerManagerFactory.NAME.equals(lmType)) { factoryClass = HierarchicalLedgerManagerFactory.class; } else { throw new IOException("Unknown ledger manager type: " + lmType); } } } lmFactory = ReflectionUtils.newInstance(factoryClass); layout = new LedgerLayout(factoryClass.getName(), lmFactory.getCurrentVersion()); try { layout.store(zk, ledgerRootPath); } catch (KeeperException.NodeExistsException nee) { LedgerLayout layout2 = LedgerLayout.readLayout(zk, ledgerRootPath); if (!layout2.equals(layout)) { throw new IOException( "Contention writing to layout to zookeeper, " + " other layout " + layout2 + " is incompatible with our " + "layout " + layout); } } return lmFactory; } /** * Format the ledger metadata for LedgerManager * * @param conf * Configuration instance * @param zk * Zookeeper instance */ public void format(final AbstractConfiguration conf, final ZooKeeper zk) throws InterruptedException, KeeperException, IOException { Class factoryClass; try { factoryClass = conf.getLedgerManagerFactoryClass(); } catch (ConfigurationException e) { throw new IOException("Failed to get ledger manager factory class from configuration : ", e); } LedgerLayout layout = LedgerLayout.readLayout(zk, conf.getZkLedgersRootPath()); layout.delete(zk, conf.getZkLedgersRootPath()); // Create new layout information again. createNewLMFactory(conf, zk, factoryClass); } } LedgerUnderreplicationManager.java000066400000000000000000000103741244507361200371610ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/meta/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.bookkeeper.meta; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.GenericCallback; import org.apache.bookkeeper.replication.ReplicationException; import java.util.Iterator; /** * Interface for marking ledgers which need to be rereplicated */ public interface LedgerUnderreplicationManager { /** * Mark a ledger as underreplicated. 
The replication should * then check which fragments are underreplicated and rereplicate them */ void markLedgerUnderreplicated(long ledgerId, String missingReplica) throws ReplicationException.UnavailableException; /** * Mark a ledger as fully replicated. If the ledger is not * already marked as underreplicated, this is a noop. */ void markLedgerReplicated(long ledgerId) throws ReplicationException.UnavailableException; /** * Get a list of all the ledgers which have been * marked for rereplication. * * @return an iterator which returns ledger ids */ Iterator listLedgersToRereplicate(); /** * Acquire a underreplicated ledger for rereplication. The ledger * should be locked, so that no other agent will receive the ledger * from this call. * The ledger should remain locked until either #markLedgerComplete * or #releaseLedger are called. * This call is blocking, so will not return until a ledger is * available for rereplication. */ long getLedgerToRereplicate() throws ReplicationException.UnavailableException; /** * Poll for a underreplicated ledger to rereplicate. * @see #getLedgerToRereplicate * @return the ledgerId, or -1 if none are available */ long pollLedgerToRereplicate() throws ReplicationException.UnavailableException; /** * Release a previously acquired ledger. This allows others to acquire * the ledger */ void releaseUnderreplicatedLedger(long ledgerId) throws ReplicationException.UnavailableException; /** * Release all resources held by the ledger underreplication manager */ void close() throws ReplicationException.UnavailableException; /** * Stop ledger replication. Currently running ledger rereplication tasks * will be continued and will be stopped from next task. This will block * ledger replication {@link #Auditor} and {@link #getLedgerToRereplicate()} * tasks */ void disableLedgerReplication() throws ReplicationException.UnavailableException; /** * Resuming ledger replication. This will allow ledger replication * {@link #Auditor} and {@link #getLedgerToRereplicate()} tasks to continue */ void enableLedgerReplication() throws ReplicationException.UnavailableException; /** * Check whether the ledger replication is enabled or not. This will return * true if the ledger replication is enabled, otherwise return false * * @return - return true if it is enabled otherwise return false */ boolean isLedgerReplicationEnabled() throws ReplicationException.UnavailableException; /** * Receive notification asynchronously when the ledger replication process * is enabled * * @param cb * - callback implementation to receive the notification */ void notifyLedgerReplicationEnabled(GenericCallback cb) throws ReplicationException.UnavailableException; } MSLedgerManagerFactory.java000066400000000000000000000773121244507361200355260ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/meta/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. 
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.bookkeeper.meta; import static org.apache.bookkeeper.metastore.MetastoreTable.ALL_FIELDS; import static org.apache.bookkeeper.metastore.MetastoreTable.NON_FIELDS; import java.io.IOException; import java.util.HashSet; import java.util.Iterator; import java.util.Set; import java.util.SortedSet; import java.util.TreeSet; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ConcurrentMap; import java.util.concurrent.Executors; import java.util.concurrent.ScheduledExecutorService; import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; import org.apache.bookkeeper.client.BKException; import org.apache.bookkeeper.client.LedgerMetadata; import org.apache.bookkeeper.conf.AbstractConfiguration; import org.apache.bookkeeper.meta.AbstractZkLedgerManager.ReadLedgerMetadataTask; import org.apache.bookkeeper.metastore.MSException; import org.apache.bookkeeper.metastore.MSWatchedEvent; import org.apache.bookkeeper.metastore.MetaStore; import org.apache.bookkeeper.metastore.MetastoreCallback; import org.apache.bookkeeper.metastore.MetastoreCursor; import org.apache.bookkeeper.metastore.MetastoreCursor.ReadEntriesCallback; import org.apache.bookkeeper.metastore.MetastoreException; import org.apache.bookkeeper.metastore.MetastoreFactory; import org.apache.bookkeeper.metastore.MetastoreScannableTable; import org.apache.bookkeeper.metastore.MetastoreTableItem; import org.apache.bookkeeper.metastore.MetastoreWatcher; import org.apache.bookkeeper.metastore.MSWatchedEvent.EventType; import org.apache.bookkeeper.metastore.Value; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.GenericCallback; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.LedgerMetadataListener; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.Processor; import org.apache.bookkeeper.replication.ReplicationException; import org.apache.bookkeeper.util.StringUtils; import org.apache.bookkeeper.util.ZkUtils; import org.apache.bookkeeper.versioning.Version; import org.apache.bookkeeper.versioning.Versioned; import org.apache.zookeeper.AsyncCallback; import org.apache.zookeeper.AsyncCallback.StringCallback; import org.apache.zookeeper.CreateMode; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.ZooDefs.Ids; import org.apache.zookeeper.ZooKeeper; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * MetaStore Based Ledger Manager Factory */ public class MSLedgerManagerFactory extends LedgerManagerFactory { static Logger LOG = LoggerFactory.getLogger(MSLedgerManagerFactory.class); static int MS_CONNECT_BACKOFF_MS = 200; public static final int CUR_VERSION = 1; public static final String TABLE_NAME = "LEDGER"; public static final String META_FIELD = ".META"; AbstractConfiguration conf; ZooKeeper zk; MetaStore metastore; @Override public int getCurrentVersion() { return CUR_VERSION; } @Override public LedgerManagerFactory initialize(final AbstractConfiguration conf, final ZooKeeper zk, final int factoryVersion) throws IOException { if (CUR_VERSION != factoryVersion) { throw new IOException("Incompatible layout 
version found : " + factoryVersion); } this.conf = conf; this.zk = zk; // load metadata store String msName = conf.getMetastoreImplClass(); try { metastore = MetastoreFactory.createMetaStore(msName); // TODO: should record version in somewhere. e.g. ZooKeeper int msVersion = metastore.getVersion(); metastore.init(conf, msVersion); } catch (Throwable t) { throw new IOException("Failed to initialize metastore " + msName + " : ", t); } return this; } @Override public void uninitialize() throws IOException { metastore.close(); } static Long key2LedgerId(String key) { return null == key ? null : Long.parseLong(key, 10); } static String ledgerId2Key(Long lid) { return null == lid ? null : StringUtils.getZKStringId(lid); } static String rangeToString(Long firstLedger, boolean firstInclusive, Long lastLedger, boolean lastInclusive) { StringBuilder sb = new StringBuilder(); sb.append(firstInclusive ? "[ " : "( ").append(firstLedger).append(" ~ ").append(lastLedger) .append(lastInclusive ? " ]" : " )"); return sb.toString(); } static SortedSet entries2Ledgers(Iterator entries) { SortedSet ledgers = new TreeSet(); while (entries.hasNext()) { MetastoreTableItem item = entries.next(); try { ledgers.add(key2LedgerId(item.getKey())); } catch (NumberFormatException nfe) { LOG.warn("Found invalid ledger key {}", item.getKey()); } } return ledgers; } static class SyncResult { T value; int rc; boolean finished = false; public synchronized void complete(int rc, T value) { this.rc = rc; this.value = value; finished = true; notify(); } public synchronized void block() { try { while (!finished) { wait(); } } catch (InterruptedException ie) { } } public synchronized int getRetCode() { return rc; } public synchronized T getResult() { return value; } } static class MsLedgerManager implements LedgerManager, MetastoreWatcher { final ZooKeeper zk; final AbstractConfiguration conf; final MetaStore metastore; final MetastoreScannableTable ledgerTable; final int maxEntriesPerScan; static final String IDGEN_ZNODE = "ms-idgen"; static final String IDGENERATION_PREFIX = "/" + IDGEN_ZNODE + "/ID-"; // ledger metadata listeners protected final ConcurrentMap> listeners = new ConcurrentHashMap>(); // Path to generate global id private final String idGenPath; // we use this to prevent long stack chains from building up in // callbacks ScheduledExecutorService scheduler; protected class ReadLedgerMetadataTask implements Runnable, GenericCallback { final long ledgerId; ReadLedgerMetadataTask(long ledgerId) { this.ledgerId = ledgerId; } @Override public void run() { if (null != listeners.get(ledgerId)) { LOG.debug("Re-read ledger metadata for {}.", ledgerId); readLedgerMetadata(ledgerId, ReadLedgerMetadataTask.this); } else { LOG.debug("Ledger metadata listener for ledger {} is already removed.", ledgerId); } } @Override public void operationComplete(int rc, final LedgerMetadata result) { if (BKException.Code.OK == rc) { final Set listenerSet = listeners.get(ledgerId); if (null != listenerSet) { LOG.debug("Ledger metadata is changed for {} : {}.", ledgerId, result); scheduler.submit(new Runnable() { @Override public void run() { synchronized(listenerSet){ for (LedgerMetadataListener listener : listenerSet) { listener.onChanged(ledgerId, result); } } } }); } } else if (BKException.Code.NoSuchLedgerExistsException == rc) { // the ledger is removed, do nothing Set listenerSet = listeners.remove(ledgerId); if (null != listenerSet) { LOG.debug("Removed ledger metadata listener set on ledger {} as its ledger is deleted : {}", ledgerId, 
listenerSet.size()); } } else { LOG.warn("Failed on read ledger metadata of ledger {} : {}", ledgerId, rc); scheduler.schedule(this, MS_CONNECT_BACKOFF_MS, TimeUnit.MILLISECONDS); } } } MsLedgerManager(final AbstractConfiguration conf, final ZooKeeper zk, final MetaStore metastore) { this.conf = conf; this.zk = zk; this.metastore = metastore; try { ledgerTable = metastore.createScannableTable(TABLE_NAME); } catch (MetastoreException mse) { LOG.error("Failed to instantiate table " + TABLE_NAME + " in metastore " + metastore.getName()); throw new RuntimeException("Failed to instantiate table " + TABLE_NAME + " in metastore " + metastore.getName()); } // configuration settings maxEntriesPerScan = conf.getMetastoreMaxEntriesPerScan(); this.idGenPath = conf.getZkLedgersRootPath() + IDGENERATION_PREFIX; this.scheduler = Executors.newSingleThreadScheduledExecutor(); } @Override public void process(MSWatchedEvent e){ long ledgerId = key2LedgerId(e.getKey()); switch(e.getType()) { case CHANGED: new ReadLedgerMetadataTask(key2LedgerId(e.getKey())).run(); break; case REMOVED: Set listenerSet = listeners.get(ledgerId); if (listenerSet != null) { synchronized (listenerSet) { for(LedgerMetadataListener l : listenerSet){ unregisterLedgerMetadataListener(ledgerId, l); l.onChanged( ledgerId, null ); } } } break; default: LOG.warn("Unknown type: {}", e.getType()); break; } } @Override public void registerLedgerMetadataListener(long ledgerId, LedgerMetadataListener listener) { if (null != listener) { LOG.info("Registered ledger metadata listener {} on ledger {}.", listener, ledgerId); Set listenerSet = listeners.get(ledgerId); if (listenerSet == null) { Set newListenerSet = new HashSet(); Set oldListenerSet = listeners.putIfAbsent(ledgerId, newListenerSet); if (null != oldListenerSet) { listenerSet = oldListenerSet; } else { listenerSet = newListenerSet; } } synchronized (listenerSet) { listenerSet.add(listener); } new ReadLedgerMetadataTask(ledgerId).run(); } } @Override public void unregisterLedgerMetadataListener(long ledgerId, LedgerMetadataListener listener) { Set listenerSet = listeners.get(ledgerId); if (listenerSet != null) { synchronized (listenerSet) { if (listenerSet.remove(listener)) { LOG.info("Unregistered ledger metadata listener {} on ledger {}.", listener, ledgerId); } if (listenerSet.isEmpty()) { listeners.remove(ledgerId, listenerSet); } } } } @Override public void close() { try { scheduler.shutdown(); } catch (Exception e) { LOG.warn("Error when closing MsLedgerManager : ", e); } ledgerTable.close(); } @Override public void createLedger(final LedgerMetadata metadata, final GenericCallback ledgerCb) { ZkUtils.asyncCreateFullPathOptimistic(zk, idGenPath, new byte[0], Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL, new StringCallback() { @Override public void processResult(int rc, String path, Object ctx, final String idPathName) { if (rc != KeeperException.Code.OK.intValue()) { LOG.error("Could not generate new ledger id", KeeperException.create(KeeperException.Code.get(rc), path)); ledgerCb.operationComplete(BKException.Code.ZKException, null); return; } /* * Extract ledger id from gen path */ long ledgerId; try { ledgerId = getLedgerIdFromGenPath(idPathName); } catch (IOException e) { LOG.error("Could not extract ledger-id from id gen path:" + path, e); ledgerCb.operationComplete(BKException.Code.ZKException, null); return; } final long lid = ledgerId; MetastoreCallback msCallback = new MetastoreCallback() { @Override public void complete(int rc, Version version, Object ctx) { if 
(MSException.Code.BadVersion.getCode() == rc) { ledgerCb.operationComplete(BKException.Code.MetadataVersionException, null); return; } if (MSException.Code.OK.getCode() != rc) { ledgerCb.operationComplete(BKException.Code.MetaStoreException, null); return; } LOG.debug("Create ledger {} with version {} successfuly.", new Object[] { lid, version }); // update version metadata.setVersion(version); ledgerCb.operationComplete(BKException.Code.OK, lid); } }; ledgerTable.put(ledgerId2Key(lid), new Value().setField(META_FIELD, metadata.serialize()), Version.NEW, msCallback, null); zk.delete(idPathName, -1, new AsyncCallback.VoidCallback() { @Override public void processResult(int rc, String path, Object ctx) { if (rc != KeeperException.Code.OK.intValue()) { LOG.warn("Exception during deleting znode for id generation : ", KeeperException.create(KeeperException.Code.get(rc), path)); } else { LOG.debug("Deleting znode for id generation : {}", idPathName); } } }, null); } }, null); } // get ledger id from generation path private long getLedgerIdFromGenPath(String nodeName) throws IOException { long ledgerId; try { String parts[] = nodeName.split(IDGENERATION_PREFIX); ledgerId = Long.parseLong(parts[parts.length - 1]); } catch (NumberFormatException e) { throw new IOException(e); } return ledgerId; } @Override public void removeLedgerMetadata(final long ledgerId, final Version version, final GenericCallback cb) { MetastoreCallback msCallback = new MetastoreCallback() { @Override public void complete(int rc, Void value, Object ctx) { int bkRc; if (MSException.Code.NoKey.getCode() == rc) { LOG.warn("Ledger entry does not exist in meta table: ledgerId={}", ledgerId); bkRc = BKException.Code.NoSuchLedgerExistsException; } else if (MSException.Code.OK.getCode() == rc) { bkRc = BKException.Code.OK; } else { bkRc = BKException.Code.MetaStoreException; } cb.operationComplete(bkRc, (Void) null); } }; ledgerTable.remove(ledgerId2Key(ledgerId), version, msCallback, null); } @Override public void readLedgerMetadata(final long ledgerId, final GenericCallback readCb) { final String key = ledgerId2Key(ledgerId); MetastoreCallback> msCallback = new MetastoreCallback>() { @Override public void complete(int rc, Versioned value, Object ctx) { if (MSException.Code.NoKey.getCode() == rc) { LOG.error("No ledger metadata found for ledger " + ledgerId + " : ", MSException.create(MSException.Code.get(rc), "No key " + key + " found.")); readCb.operationComplete(BKException.Code.NoSuchLedgerExistsException, null); return; } if (MSException.Code.OK.getCode() != rc) { LOG.error("Could not read metadata for ledger " + ledgerId + " : ", MSException.create(MSException.Code.get(rc), "Failed to get key " + key)); readCb.operationComplete(BKException.Code.MetaStoreException, null); return; } LedgerMetadata metadata; try { metadata = LedgerMetadata .parseConfig(value.getValue().getField(META_FIELD), value.getVersion()); } catch (IOException e) { LOG.error("Could not parse ledger metadata for ledger " + ledgerId + " : ", e); readCb.operationComplete(BKException.Code.MetaStoreException, null); return; } readCb.operationComplete(BKException.Code.OK, metadata); } }; ledgerTable.get(key, this, msCallback, ALL_FIELDS); } @Override public void writeLedgerMetadata(final long ledgerId, final LedgerMetadata metadata, final GenericCallback cb) { Value data = new Value().setField(META_FIELD, metadata.serialize()); LOG.debug("Writing ledger {} metadata, version {}", new Object[] { ledgerId, metadata.getVersion() }); final String key = 
ledgerId2Key(ledgerId); MetastoreCallback msCallback = new MetastoreCallback() { @Override public void complete(int rc, Version version, Object ctx) { int bkRc; if (MSException.Code.BadVersion.getCode() == rc) { LOG.info("Bad version provided to update metadata for ledger {}", ledgerId); bkRc = BKException.Code.MetadataVersionException; } else if (MSException.Code.NoKey.getCode() == rc) { LOG.warn("Ledger {} doesn't exist when writing its ledger metadata.", ledgerId); bkRc = BKException.Code.NoSuchLedgerExistsException; } else if (MSException.Code.OK.getCode() == rc) { metadata.setVersion(version); bkRc = BKException.Code.OK; } else { LOG.warn("Conditional update ledger metadata failed: ", MSException.create(MSException.Code.get(rc), "Failed to put key " + key)); bkRc = BKException.Code.MetaStoreException; } cb.operationComplete(bkRc, null); } }; ledgerTable.put(key, data, metadata.getVersion(), msCallback, null); } @Override public void asyncProcessLedgers(final Processor processor, final AsyncCallback.VoidCallback finalCb, final Object context, final int successRc, final int failureRc) { MetastoreCallback openCursorCb = new MetastoreCallback() { @Override public void complete(int rc, MetastoreCursor cursor, Object ctx) { if (MSException.Code.OK.getCode() != rc) { finalCb.processResult(failureRc, null, context); return; } if (!cursor.hasMoreEntries()) { finalCb.processResult(successRc, null, context); return; } asyncProcessLedgers(cursor, processor, finalCb, context, successRc, failureRc); } }; ledgerTable.openCursor(NON_FIELDS, openCursorCb, null); } void asyncProcessLedgers(final MetastoreCursor cursor, final Processor processor, final AsyncCallback.VoidCallback finalCb, final Object context, final int successRc, final int failureRc) { scheduler.submit(new Runnable() { @Override public void run() { doAsyncProcessLedgers(cursor, processor, finalCb, context, successRc, failureRc); } }); } void doAsyncProcessLedgers(final MetastoreCursor cursor, final Processor processor, final AsyncCallback.VoidCallback finalCb, final Object context, final int successRc, final int failureRc) { // no entries now if (!cursor.hasMoreEntries()) { finalCb.processResult(successRc, null, context); return; } ReadEntriesCallback msCallback = new ReadEntriesCallback() { @Override public void complete(int rc, Iterator entries, Object ctx) { if (MSException.Code.OK.getCode() != rc) { finalCb.processResult(failureRc, null, context); return; } SortedSet ledgers = new TreeSet(); while (entries.hasNext()) { MetastoreTableItem item = entries.next(); try { ledgers.add(key2LedgerId(item.getKey())); } catch (NumberFormatException nfe) { LOG.warn("Found invalid ledger key {}", item.getKey()); } } if (0 == ledgers.size()) { // process next batch of ledgers asyncProcessLedgers(cursor, processor, finalCb, context, successRc, failureRc); return; } final long startLedger = ledgers.first(); final long endLedger = ledgers.last(); AsyncSetProcessor setProcessor = new AsyncSetProcessor(scheduler); // process set setProcessor.process(ledgers, processor, new AsyncCallback.VoidCallback() { @Override public void processResult(int rc, String path, Object ctx) { if (successRc != rc) { LOG.error("Failed when processing range " + rangeToString(startLedger, true, endLedger, true)); finalCb.processResult(failureRc, null, context); return; } // process next batch of ledgers asyncProcessLedgers(cursor, processor, finalCb, context, successRc, failureRc); } }, context, successRc, failureRc); } }; cursor.asyncReadEntries(maxEntriesPerScan,
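writeLedgerMetadata above is a single compare-and-set attempt: the put only succeeds if the version stored in the table still matches metadata.getVersion(), and a mismatch surfaces as MetadataVersionException for the caller to resolve. The retry loop that callers typically wrap around such an API looks like the following sketch (the Store interface is illustrative and stands in for the MetastoreTable API):

    import java.util.ConcurrentModificationException;

    class CasUpdateSketch {
        // Minimal versioned KV abstraction; read() returns {value, version}.
        interface Store {
            String[] read(String key);
            // throws ConcurrentModificationException if expectedVersion is stale
            void put(String key, String value, String expectedVersion);
        }

        // Classic read-modify-write loop on top of a conditional put.
        static void appendSuffix(Store store, String key, String suffix) {
            while (true) {
                String[] cur = store.read(key);
                try {
                    store.put(key, cur[0] + suffix, cur[1]);
                    return; // CAS succeeded
                } catch (ConcurrentModificationException stale) {
                    // another writer won the race; re-read and retry
                }
            }
        }
    }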
msCallback, null); } class MSLedgerRangeIterator implements LedgerRangeIterator { final CountDownLatch openCursorLatch = new CountDownLatch(1); MetastoreCursor cursor = null; // last ledger id in previous range MSLedgerRangeIterator() { MetastoreCallback openCursorCb = new MetastoreCallback() { @Override public void complete(int rc, MetastoreCursor newCursor, Object ctx) { if (MSException.Code.OK.getCode() != rc) { LOG.error("Error opening cursor for ledger range iterator {}", rc); } else { cursor = newCursor; } openCursorLatch.countDown(); } }; ledgerTable.openCursor(NON_FIELDS, openCursorCb, null); } @Override public boolean hasNext() throws IOException { try { openCursorLatch.await(); } catch (InterruptedException ie) { LOG.error("Interrupted waiting for cursor to open", ie); Thread.currentThread().interrupt(); throw new IOException("Interrupted waiting to read range", ie); } if (cursor == null) { throw new IOException("Failed to open ledger range cursor, check logs"); } return cursor.hasMoreEntries(); } @Override public LedgerRange next() throws IOException { try { SortedSet ledgerIds = new TreeSet(); Iterator iter = cursor.readEntries(maxEntriesPerScan); while (iter.hasNext()) { ledgerIds.add(key2LedgerId(iter.next().getKey())); } return new LedgerRange(ledgerIds); } catch (MSException mse) { LOG.error("Exception occurred reading from metastore", mse); throw new IOException("Couldn't read from metastore", mse); } } } @Override public LedgerRangeIterator getLedgerRanges() { return new MSLedgerRangeIterator(); } } @Override public LedgerManager newLedgerManager() { return new MsLedgerManager(conf, zk, metastore); } @Override public LedgerUnderreplicationManager newLedgerUnderreplicationManager() throws KeeperException, InterruptedException, ReplicationException.CompatibilityException { // TODO: currently just use zk ledger underreplication manager return new ZkLedgerUnderreplicationManager(conf, zk); } /** * Process a set one by one in an asynchronous way. Processing will be stopped * immediately when an error occurs.
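MSLedgerRangeIterator bridges an asynchronous openCursor call into the synchronous Iterator contract with a CountDownLatch: hasNext() blocks until the callback has either delivered a cursor or left it null. The same bridge in isolation (the interface names are illustrative):

    import java.io.IOException;
    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.atomic.AtomicReference;

    class LatchBridgeSketch {
        interface AsyncOpen<T> {
            void open(Callback<T> cb);
        }
        interface Callback<T> {
            void done(T result); // null signals failure
        }

        // Turn a callback-style open() into a blocking call.
        static <T> T openBlocking(AsyncOpen<T> source) throws IOException {
            final AtomicReference<T> holder = new AtomicReference<T>();
            final CountDownLatch latch = new CountDownLatch(1);
            source.open(new Callback<T>() {
                @Override
                public void done(T result) {
                    holder.set(result);
                    latch.countDown(); // wake the waiting thread
                }
            });
            try {
                latch.await();
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt(); // preserve the flag
                throw new IOException("Interrupted waiting for open", ie);
            }
            if (holder.get() == null) {
                throw new IOException("Open failed, check logs");
            }
            return holder.get();
        }
    }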
*/ private static class AsyncSetProcessor { // use this to prevent long stack chains from building up in callbacks ScheduledExecutorService scheduler; /** * Constructor * * @param scheduler * Executor used to prevent long stack chains */ public AsyncSetProcessor(ScheduledExecutorService scheduler) { this.scheduler = scheduler; } /** * Process a set of items * * @param data * Set of data to process * @param processor * Callback used to process each element of the set * @param finalCb * Final callback to be called after all elements in the set * are processed * @param context * Context of final callback * @param successRc * RC passed to final callback on success * @param failureRc * RC passed to final callback on failure */ public void process(final Set data, final Processor processor, final AsyncCallback.VoidCallback finalCb, final Object context, final int successRc, final int failureRc) { if (data == null || data.size() == 0) { finalCb.processResult(successRc, null, context); return; } final Iterator iter = data.iterator(); AsyncCallback.VoidCallback stubCallback = new AsyncCallback.VoidCallback() { @Override public void processResult(int rc, String path, Object ctx) { if (rc != successRc) { // terminate immediately finalCb.processResult(failureRc, null, context); return; } if (!iter.hasNext()) { // reached the end of the list finalCb.processResult(successRc, null, context); return; } // process next element final T dataToProcess = iter.next(); final AsyncCallback.VoidCallback stub = this; scheduler.submit(new Runnable() { @Override public final void run() { processor.process(dataToProcess, stub); } }); } }; T firstElement = iter.next(); processor.process(firstElement, stubCallback); } } } ZkLedgerUnderreplicationManager.java000066400000000000000000000647061244507361200374760ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/meta/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License.
*/ package org.apache.bookkeeper.meta; import org.apache.bookkeeper.replication.ReplicationEnableCb; import org.apache.bookkeeper.replication.ReplicationException; import org.apache.bookkeeper.replication.ReplicationException.UnavailableException; import org.apache.bookkeeper.util.BookKeeperConstants; import org.apache.bookkeeper.util.ZkUtils; import org.apache.bookkeeper.net.DNS; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.GenericCallback; import org.apache.bookkeeper.proto.DataFormats.LedgerRereplicationLayoutFormat; import org.apache.bookkeeper.proto.DataFormats.UnderreplicatedLedgerFormat; import org.apache.bookkeeper.proto.DataFormats.LockDataFormat; import org.apache.bookkeeper.conf.AbstractConfiguration; import org.apache.zookeeper.ZooKeeper; import org.apache.zookeeper.WatchedEvent; import org.apache.zookeeper.Watcher; import org.apache.zookeeper.CreateMode; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.data.Stat; import org.apache.zookeeper.ZooDefs.Ids; import com.google.common.annotations.VisibleForTesting; import com.google.protobuf.TextFormat; import com.google.common.base.Joiner; import static com.google.common.base.Charsets.UTF_8; import java.net.UnknownHostException; import java.util.concurrent.CountDownLatch; import java.util.concurrent.ConcurrentHashMap; import java.util.Map; import java.util.List; import java.util.Collections; import java.util.Arrays; import java.util.Deque; import java.util.ArrayDeque; import java.util.Iterator; import java.util.LinkedList; import java.util.Queue; import java.util.ArrayList; import java.util.regex.Pattern; import java.util.regex.Matcher; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * ZooKeeper implementation of underreplication manager. * This is implemented in a hierarchical fashion, so it'll work with * FlatLedgerManagerFactory and HierarchicalLedgerManagerFactory. * * Layout is: * /root/underreplication/ LAYOUT * ledgers/(hierarchicalpath)/urL(ledgerId) * locks/(ledgerId) * * The hierarchical path is created by splitting the ledger id into four 2-byte * segments which are represented in hexadecimal. * e.g.
For ledger id 0xcafebeef0000feed, the path is * cafe/beef/0000/feed/ */ public class ZkLedgerUnderreplicationManager implements LedgerUnderreplicationManager { static final Logger LOG = LoggerFactory.getLogger(ZkLedgerUnderreplicationManager.class); static final String LAYOUT="BASIC"; static final int LAYOUT_VERSION=1; private static class Lock { private final String lockZNode; private final int ledgerZNodeVersion; Lock(String lockZNode, int ledgerZNodeVersion) { this.lockZNode = lockZNode; this.ledgerZNodeVersion = ledgerZNodeVersion; } String getLockZNode() { return lockZNode; } int getLedgerZNodeVersion() { return ledgerZNodeVersion; } }; private final Map heldLocks = new ConcurrentHashMap(); private final Pattern idExtractionPattern; private final String basePath; private final String urLedgerPath; private final String urLockPath; private final String layoutZNode; private final LockDataFormat lockData; private final ZooKeeper zkc; public ZkLedgerUnderreplicationManager(AbstractConfiguration conf, ZooKeeper zkc) throws KeeperException, InterruptedException, ReplicationException.CompatibilityException { basePath = conf.getZkLedgersRootPath() + '/' + BookKeeperConstants.UNDER_REPLICATION_NODE; layoutZNode = basePath + '/' + BookKeeperConstants.LAYOUT_ZNODE; urLedgerPath = basePath + BookKeeperConstants.DEFAULT_ZK_LEDGERS_ROOT_PATH; urLockPath = basePath + "/locks"; idExtractionPattern = Pattern.compile("urL(\\d+)$"); this.zkc = zkc; LockDataFormat.Builder lockDataBuilder = LockDataFormat.newBuilder(); try { lockDataBuilder.setBookieId(DNS.getDefaultHost("default")); } catch (UnknownHostException uhe) { // if we can't get the address, ignore. it's optional // in the data structure in any case } lockData = lockDataBuilder.build(); checkLayout(); } private void checkLayout() throws KeeperException, InterruptedException, ReplicationException.CompatibilityException { if (zkc.exists(basePath, false) == null) { try { zkc.create(basePath, new byte[0], Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT); } catch (KeeperException.NodeExistsException nee) { // do nothing, someone else could have created it } } while (true) { if (zkc.exists(layoutZNode, false) == null) { LedgerRereplicationLayoutFormat.Builder builder = LedgerRereplicationLayoutFormat.newBuilder(); builder.setType(LAYOUT).setVersion(LAYOUT_VERSION); try { zkc.create(layoutZNode, TextFormat.printToString(builder.build()).getBytes(UTF_8), Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT); } catch (KeeperException.NodeExistsException nne) { // someone else managed to create it continue; } } else { byte[] layoutData = zkc.getData(layoutZNode, false, null); LedgerRereplicationLayoutFormat.Builder builder = LedgerRereplicationLayoutFormat.newBuilder(); try { TextFormat.merge(new String(layoutData, UTF_8), builder); LedgerRereplicationLayoutFormat layout = builder.build(); if (!layout.getType().equals(LAYOUT) || layout.getVersion() != LAYOUT_VERSION) { throw new ReplicationException.CompatibilityException( "Incompatible layout found (" + LAYOUT + ":" + LAYOUT_VERSION + ")"); } } catch (TextFormat.ParseException pe) { throw new ReplicationException.CompatibilityException( "Invalid data found", pe); } break; } } if (zkc.exists(urLedgerPath, false) == null) { try { zkc.create(urLedgerPath, new byte[0], Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT); } catch (KeeperException.NodeExistsException nee) { // do nothing, someone else could have created it } } if (zkc.exists(urLockPath, false) == null) { try { zkc.create(urLockPath, new byte[0], Ids.OPEN_ACL_UNSAFE,
CreateMode.PERSISTENT); } catch (KeeperException.NodeExistsException nee) { // do nothing, someone else could have created it } } } private long getLedgerId(String path) throws NumberFormatException { Matcher m = idExtractionPattern.matcher(path); if (m.find()) { return Long.valueOf(m.group(1)); } else { throw new NumberFormatException("Couldn't find ledger id in path"); } } public static String getParentZnodePath(String base, long ledgerId) { String subdir1 = String.format("%04x", ledgerId >> 48 & 0xffff); String subdir2 = String.format("%04x", ledgerId >> 32 & 0xffff); String subdir3 = String.format("%04x", ledgerId >> 16 & 0xffff); String subdir4 = String.format("%04x", ledgerId & 0xffff); return String.format("%s/%s/%s/%s/%s", base, subdir1, subdir2, subdir3, subdir4); } public static String getUrLedgerZnode(String base, long ledgerId) { return String.format("%s/urL%010d", getParentZnodePath(base, ledgerId), ledgerId); } private String getUrLedgerZnode(long ledgerId) { return getUrLedgerZnode(urLedgerPath, ledgerId); } @VisibleForTesting public UnderreplicatedLedgerFormat getLedgerUnreplicationInfo(long ledgerId) throws KeeperException, TextFormat.ParseException, InterruptedException { String znode = getUrLedgerZnode(ledgerId); UnderreplicatedLedgerFormat.Builder builder = UnderreplicatedLedgerFormat.newBuilder(); byte[] data = zkc.getData(znode, false, null); TextFormat.merge(new String(data, UTF_8), builder); return builder.build(); } @Override public void markLedgerUnderreplicated(long ledgerId, String missingReplica) throws ReplicationException.UnavailableException { LOG.debug("markLedgerUnderreplicated(ledgerId={}, missingReplica={})", ledgerId, missingReplica); try { String znode = getUrLedgerZnode(ledgerId); while (true) { UnderreplicatedLedgerFormat.Builder builder = UnderreplicatedLedgerFormat.newBuilder(); try { builder.addReplica(missingReplica); ZkUtils.createFullPathOptimistic(zkc, znode, TextFormat .printToString(builder.build()).getBytes(UTF_8), Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT); } catch (KeeperException.NodeExistsException nee) { Stat s = zkc.exists(znode, false); if (s == null) { continue; } try { byte[] bytes = zkc.getData(znode, false, s); builder.clear(); TextFormat.merge(new String(bytes, UTF_8), builder); UnderreplicatedLedgerFormat data = builder.build(); if (data.getReplicaList().contains(missingReplica)) { return; // nothing to add } builder.addReplica(missingReplica); zkc.setData(znode, TextFormat.printToString(builder.build()).getBytes(UTF_8), s.getVersion()); } catch (KeeperException.NoNodeException nne) { continue; } catch (KeeperException.BadVersionException bve) { continue; } catch (TextFormat.ParseException pe) { throw new ReplicationException.UnavailableException( "Invalid data found", pe); } } break; } } catch (KeeperException ke) { throw new ReplicationException.UnavailableException("Error contacting zookeeper", ke); } catch (InterruptedException ie) { Thread.currentThread().interrupt(); throw new ReplicationException.UnavailableException("Interrupted while contacting zookeeper", ie); } } @Override public void markLedgerReplicated(long ledgerId) throws ReplicationException.UnavailableException { LOG.debug("markLedgerReplicated(ledgerId={})", ledgerId); try { Lock l = heldLocks.get(ledgerId); if (l != null) { zkc.delete(getUrLedgerZnode(ledgerId), l.getLedgerZNodeVersion()); try { // clean up the hierarchy String parts[] = getUrLedgerZnode(ledgerId).split("/"); for (int i = 1; i <= 4; i++) { String p[] = Arrays.copyOf(parts, parts.length - i);
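getParentZnodePath spreads ledgers across the znode tree by slicing the 64-bit ledger id into four 16-bit chunks, each rendered as four hex digits, which keeps any single ZooKeeper directory from growing without bound. A compact equivalent with a runnable check:

    class HierPathSketch {
        // Equivalent of getParentZnodePath: four 16-bit slices, hex-encoded.
        static String parentPath(String base, long ledgerId) {
            return String.format("%s/%04x/%04x/%04x/%04x",
                    base,
                    (ledgerId >> 48) & 0xffff,
                    (ledgerId >> 32) & 0xffff,
                    (ledgerId >> 16) & 0xffff,
                    ledgerId & 0xffff);
        }

        public static void main(String[] args) {
            // prints base/cafe/beef/0000/feed, matching the class javadoc
            System.out.println(parentPath("base", 0xcafebeef0000feedL));
        }
    }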
String path = Joiner.on("/").join(p); Stat s = zkc.exists(path, null); if (s != null) { zkc.delete(path, s.getVersion()); } } } catch (KeeperException.NotEmptyException nee) { // This can happen when cleaning up the hierarchy. // It's safe to ignore, it simply means another // ledger in the same hierarchy has been marked as // underreplicated. } } } catch (KeeperException.NoNodeException nne) { // this is ok } catch (KeeperException.BadVersionException bve) { // if this is the case, someone has marked the ledger // for rereplication again. Leave the underreplicated // znode in place, so the ledger is checked. } catch (KeeperException ke) { LOG.error("Error deleting underreplicated ledger znode", ke); throw new ReplicationException.UnavailableException("Error contacting zookeeper", ke); } catch (InterruptedException ie) { Thread.currentThread().interrupt(); throw new ReplicationException.UnavailableException("Interrupted while contacting zookeeper", ie); } finally { releaseUnderreplicatedLedger(ledgerId); } } @Override public Iterator listLedgersToRereplicate() { final Queue queue = new LinkedList(); queue.add(urLedgerPath); return new Iterator() { final Queue curBatch = new LinkedList(); @Override public void remove() { throw new UnsupportedOperationException(); } @Override public boolean hasNext() { if (curBatch.size() > 0) { return true; } while (queue.size() > 0 && curBatch.size() == 0) { String parent = queue.remove(); try { for (String c : zkc.getChildren(parent,false)) { String child = parent + "/" + c; if (c.startsWith("urL")) { curBatch.add(getLedgerId(child)); } else { queue.add(child); } } } catch (InterruptedException ie) { Thread.currentThread().interrupt(); return false; } catch (KeeperException.NoNodeException nne) { // ignore } catch (Exception e) { throw new RuntimeException("Error reading list", e); } } return curBatch.size() > 0; } @Override public Long next() { assert curBatch.size() > 0; return curBatch.remove(); } }; } private long getLedgerToRereplicateFromHierarchy(String parent, long depth, Watcher w) throws KeeperException, InterruptedException { if (depth == 4) { List children; try { children = zkc.getChildren(parent, w); } catch (KeeperException.NoNodeException nne) { // can occur if another underreplicated ledger's // hierarchy is being cleaned up return -1; } Collections.shuffle(children); while (children.size() > 0) { String tryChild = children.get(0); try { String lockPath = urLockPath + "/" + tryChild; if (zkc.exists(lockPath, w) != null) { children.remove(tryChild); continue; } Stat stat = zkc.exists(parent + "/" + tryChild, false); if (stat == null) { LOG.debug("{}/{} doesn't exist", parent, tryChild); children.remove(tryChild); continue; } long ledgerId = getLedgerId(tryChild); zkc.create(lockPath, TextFormat.printToString(lockData).getBytes(UTF_8), Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL); heldLocks.put(ledgerId, new Lock(lockPath, stat.getVersion())); return ledgerId; } catch (KeeperException.NodeExistsException nee) { children.remove(tryChild); } catch (NumberFormatException nfe) { children.remove(tryChild); } } return -1; } List children; try { children = zkc.getChildren(parent, w); } catch (KeeperException.NoNodeException nne) { // can occur if another underreplicated ledger's // hierarchy is being cleaned up return -1; } Collections.shuffle(children); while (children.size() > 0) { String tryChild = children.get(0); String tryPath = parent + "/" + tryChild; long ledger = getLedgerToRereplicateFromHierarchy(tryPath, depth + 1, w); if (ledger != -1) {
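The lock taken in getLedgerToRereplicateFromHierarchy is just an EPHEMERAL znode: the first client to create the lock path owns the ledger, NodeExistsException means somebody else does, and a crashed owner's lock disappears with its ZooKeeper session. The idiom on its own (paths and the owner payload are illustrative):

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.ZooDefs.Ids;
    import org.apache.zookeeper.ZooKeeper;

    class EphemeralLockSketch {
        // Try to take a lease on a resource; returns false if already held.
        static boolean tryLock(ZooKeeper zk, String lockPath, byte[] owner)
                throws KeeperException, InterruptedException {
            try {
                zk.create(lockPath, owner, Ids.OPEN_ACL_UNSAFE,
                        CreateMode.EPHEMERAL);
                return true;  // held until explicit delete or session end
            } catch (KeeperException.NodeExistsException nee) {
                return false; // somebody else got there first
            }
        }
    }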
return ledger; } children.remove(tryChild); } return -1; } @Override public long pollLedgerToRereplicate() throws ReplicationException.UnavailableException { LOG.debug("pollLedgerToRereplicate()"); try { Watcher w = new Watcher() { public void process(WatchedEvent e) { // do nothing } }; return getLedgerToRereplicateFromHierarchy(urLedgerPath, 0, w); } catch (KeeperException ke) { throw new ReplicationException.UnavailableException("Error contacting zookeeper", ke); } catch (InterruptedException ie) { Thread.currentThread().interrupt(); throw new ReplicationException.UnavailableException("Interrupted while contacting zookeeper", ie); } } @Override public long getLedgerToRereplicate() throws ReplicationException.UnavailableException { LOG.debug("getLedgerToRereplicate()"); try { while (true) { waitIfLedgerReplicationDisabled(); final CountDownLatch changedLatch = new CountDownLatch(1); Watcher w = new Watcher() { public void process(WatchedEvent e) { if (e.getType() == Watcher.Event.EventType.NodeChildrenChanged || e.getType() == Watcher.Event.EventType.NodeDeleted || e.getType() == Watcher.Event.EventType.NodeCreated || e.getState() == Watcher.Event.KeeperState.Expired || e.getState() == Watcher.Event.KeeperState.Disconnected) { changedLatch.countDown(); } } }; long ledger = getLedgerToRereplicateFromHierarchy(urLedgerPath, 0, w); if (ledger != -1) { return ledger; } // nothing found, wait for a watcher to trigger changedLatch.await(); } } catch (KeeperException ke) { throw new ReplicationException.UnavailableException("Error contacting zookeeper", ke); } catch (InterruptedException ie) { Thread.currentThread().interrupt(); throw new ReplicationException.UnavailableException("Interrupted while contacting zookeeper", ie); } } private void waitIfLedgerReplicationDisabled() throws UnavailableException, InterruptedException { ReplicationEnableCb cb = new ReplicationEnableCb(); if (!this.isLedgerReplicationEnabled()) { this.notifyLedgerReplicationEnabled(cb); cb.await(); } } @Override public void releaseUnderreplicatedLedger(long ledgerId) throws ReplicationException.UnavailableException { LOG.debug("releaseLedger(ledgerId={})", ledgerId); try { Lock l = heldLocks.remove(ledgerId); if (l != null) { zkc.delete(l.getLockZNode(), -1); } } catch (KeeperException.NoNodeException nne) { // this is ok } catch (KeeperException ke) { LOG.error("Error deleting underreplicated ledger lock", ke); throw new ReplicationException.UnavailableException("Error contacting zookeeper", ke); } catch (InterruptedException ie) { Thread.currentThread().interrupt(); throw new ReplicationException.UnavailableException("Interrupted while contacting zookeeper", ie); } } @Override public void close() throws ReplicationException.UnavailableException { LOG.debug("close()"); try { for (Map.Entry e : heldLocks.entrySet()) { zkc.delete(e.getValue().getLockZNode(), -1); } } catch (KeeperException.NoNodeException nne) { // this is ok } catch (KeeperException ke) { LOG.error("Error deleting underreplicated ledger lock", ke); throw new ReplicationException.UnavailableException("Error contacting zookeeper", ke); } catch (InterruptedException ie) { Thread.currentThread().interrupt(); throw new ReplicationException.UnavailableException("Interrupted while contacting zookeeper", ie); } } @Override public void disableLedgerReplication() throws ReplicationException.UnavailableException { LOG.debug("disableLedgerReplication()"); try { String znode = basePath + '/' + BookKeeperConstants.DISABLE_NODE; zkc.create(znode, "".getBytes(UTF_8),
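getLedgerToRereplicate never spins in a tight poll: each scan installs watchers as a side effect, and when nothing is lockable the thread parks on a latch that any relevant watch event releases (session expiry and disconnect are included so the loop re-evaluates rather than hanging). Reduced to its essentials (the znode path is illustrative):

    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    class WaitForChangeSketch {
        // Block until something happens to the watched znode.
        static void awaitChange(ZooKeeper zk, String path) throws Exception {
            final CountDownLatch changed = new CountDownLatch(1);
            Watcher w = new Watcher() {
                @Override
                public void process(WatchedEvent e) {
                    changed.countDown(); // any event: rescan from scratch
                }
            };
            zk.exists(path, w); // registers the (one-shot) watcher
            changed.await();    // parked until the watcher fires
        }
    }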
Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT); LOG.info("Auto ledger re-replication is disabled!"); } catch (KeeperException.NodeExistsException ke) { LOG.warn("AutoRecovery is already disabled!", ke); throw new ReplicationException.UnavailableException( "AutoRecovery is already disabled!", ke); } catch (KeeperException ke) { LOG.error("Exception while stopping auto ledger re-replication", ke); throw new ReplicationException.UnavailableException( "Exception while stopping auto ledger re-replication", ke); } catch (InterruptedException ie) { Thread.currentThread().interrupt(); throw new ReplicationException.UnavailableException( "Interrupted while stopping auto ledger re-replication", ie); } } @Override public void enableLedgerReplication() throws ReplicationException.UnavailableException { LOG.debug("enableLedgerReplication()"); try { zkc.delete(basePath + '/' + BookKeeperConstants.DISABLE_NODE, -1); LOG.info("Resuming automatic ledger re-replication"); } catch (KeeperException.NoNodeException ke) { LOG.warn("AutoRecovery is already enabled!", ke); throw new ReplicationException.UnavailableException( "AutoRecovery is already enabled!", ke); } catch (KeeperException ke) { LOG.error("Exception while resuming ledger replication", ke); throw new ReplicationException.UnavailableException( "Exception while resuming auto ledger re-replication", ke); } catch (InterruptedException ie) { Thread.currentThread().interrupt(); throw new ReplicationException.UnavailableException( "Interrupted while resuming auto ledger re-replication", ie); } } @Override public boolean isLedgerReplicationEnabled() throws ReplicationException.UnavailableException { LOG.debug("isLedgerReplicationEnabled()"); try { if (null != zkc.exists(basePath + '/' + BookKeeperConstants.DISABLE_NODE, false)) { return false; } return true; } catch (KeeperException ke) { LOG.error("Error while checking the state of " + "ledger re-replication", ke); throw new ReplicationException.UnavailableException( "Error contacting zookeeper", ke); } catch (InterruptedException ie) { Thread.currentThread().interrupt(); throw new ReplicationException.UnavailableException( "Interrupted while contacting zookeeper", ie); } } @Override public void notifyLedgerReplicationEnabled(final GenericCallback cb) throws ReplicationException.UnavailableException { LOG.debug("notifyLedgerReplicationEnabled()"); Watcher w = new Watcher() { public void process(WatchedEvent e) { if (e.getType() == Watcher.Event.EventType.NodeDeleted) { cb.operationComplete(0, null); } } }; try { if (null == zkc.exists(basePath + '/' + BookKeeperConstants.DISABLE_NODE, w)) { cb.operationComplete(0, null); return; } } catch (KeeperException ke) { LOG.error("Error while checking the state of " + "ledger re-replication", ke); throw new ReplicationException.UnavailableException( "Error contacting zookeeper", ke); } catch (InterruptedException ie) { Thread.currentThread().interrupt(); throw new ReplicationException.UnavailableException( "Interrupted while contacting zookeeper", ie); } } } bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/meta/ZkVersion.java000066400000000000000000000041351244507361200332430ustar00rootroot00000000000000/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership.
The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.meta; import org.apache.bookkeeper.versioning.Version; import org.apache.bookkeeper.versioning.Version.Occurred; public class ZkVersion implements Version { int znodeVersion; public ZkVersion(int version) { znodeVersion = version; } @Override public Occurred compare(Version v) { if (null == v) { throw new NullPointerException("Version is not allowed to be null."); } if (v == Version.NEW) { return Occurred.AFTER; } else if (v == Version.ANY) { return Occurred.CONCURRENTLY; } else if (!(v instanceof ZkVersion)) { throw new IllegalArgumentException("Invalid version type"); } ZkVersion zv = (ZkVersion)v; int res = znodeVersion - zv.znodeVersion; if (res == 0) { return Occurred.CONCURRENTLY; } else if (res < 0) { return Occurred.BEFORE; } else { return Occurred.AFTER; } } public int getZnodeVersion() { return znodeVersion; } public ZkVersion setZnodeVersion(int znodeVersion) { this.znodeVersion = znodeVersion; return this; } @Override public String toString() { return Integer.toString(znodeVersion, 10); } } bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/metastore/000077500000000000000000000000001244507361200315205ustar00rootroot00000000000000InMemoryMetaStore.java000066400000000000000000000041231244507361200356670ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/metastore/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
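ZkVersion, shown just above, maps znode version numbers onto the Version.Occurred partial order: equal numbers compare as CONCURRENTLY, and the NEW and ANY sentinels are special-cased before the numeric comparison. Its expected behaviour, as a quick check (run with -ea to enable the asserts):

    import org.apache.bookkeeper.meta.ZkVersion;
    import org.apache.bookkeeper.versioning.Version;

    class ZkVersionDemo {
        public static void main(String[] args) {
            ZkVersion v1 = new ZkVersion(1);
            ZkVersion v2 = new ZkVersion(2);
            assert v1.compare(v2) == Version.Occurred.BEFORE;
            assert v2.compare(v1) == Version.Occurred.AFTER;
            assert v1.compare(new ZkVersion(1)) == Version.Occurred.CONCURRENTLY;
            assert v1.compare(Version.NEW) == Version.Occurred.AFTER; // anything follows NEW
            assert v1.compare(Version.ANY) == Version.Occurred.CONCURRENTLY;
        }
    }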
*/ package org.apache.bookkeeper.metastore; import java.util.HashMap; import java.util.Map; import org.apache.commons.configuration.Configuration; public class InMemoryMetaStore implements MetaStore { static final int CUR_VERSION = 1; static Map tables = new HashMap(); // for test public static void reset() { tables.clear(); } @Override public String getName() { return getClass().getName(); } @Override public int getVersion() { return CUR_VERSION; } @Override public void init(Configuration conf, int msVersion) throws MetastoreException { // do nothing } @Override public void close() { // do nothing } @Override public MetastoreTable createTable(String name) { return createInMemoryTable(name); } @Override public MetastoreScannableTable createScannableTable(String name) { return createInMemoryTable(name); } private InMemoryMetastoreTable createInMemoryTable(String name) { InMemoryMetastoreTable t = tables.get(name); if (t == null) { t = new InMemoryMetastoreTable(this, name); tables.put(name, t); } return t; } } InMemoryMetastoreCursor.java000066400000000000000000000067161244507361200371330ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/metastore/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.bookkeeper.metastore; import static org.apache.bookkeeper.metastore.InMemoryMetastoreTable.cloneValue; import java.io.IOException; import java.util.ArrayList; import java.util.Iterator; import java.util.List; import java.util.Map; import java.util.Set; import java.util.SortedMap; import java.util.concurrent.ScheduledExecutorService; import org.apache.bookkeeper.metastore.MSException.Code; import org.apache.bookkeeper.versioning.Versioned; import com.google.common.collect.ImmutableSortedMap; class InMemoryMetastoreCursor implements MetastoreCursor { private final ScheduledExecutorService scheduler; private final Iterator<Map.Entry<String, Versioned<Value>>> iter; private final Set fields; public InMemoryMetastoreCursor(SortedMap<String, Versioned<Value>> map, Set fields, ScheduledExecutorService scheduler) { // copy the map for the iterator to avoid concurrent modification problems.
this.iter = ImmutableSortedMap.copyOfSorted(map).entrySet().iterator(); this.fields = fields; this.scheduler = scheduler; } @Override public boolean hasMoreEntries() { return iter.hasNext(); } @Override public Iterator readEntries(int numEntries) throws MSException { if (numEntries < 0) { throw MSException.create(Code.IllegalOp); } return unsafeReadEntries(numEntries); } @Override public void asyncReadEntries(final int numEntries, final ReadEntriesCallback cb, final Object ctx) { scheduler.submit(new Runnable() { @Override public void run() { if (numEntries < 0) { cb.complete(Code.IllegalOp.getCode(), null, ctx); return; } Iterator result = unsafeReadEntries(numEntries); cb.complete(Code.OK.getCode(), result, ctx); } }); } private Iterator unsafeReadEntries(int numEntries) { List entries = new ArrayList(); int nCount = 0; while (iter.hasNext() && nCount < numEntries) { Map.Entry<String, Versioned<Value>> entry = iter.next(); Versioned value = entry.getValue(); Versioned vv = cloneValue(value.getValue(), value.getVersion(), fields); String key = entry.getKey(); entries.add(new MetastoreTableItem(key, vv)); ++nCount; } return entries.iterator(); } @Override public void close() throws IOException { // do nothing } } InMemoryMetastoreTable.java000066400000000000000000000317451244507361200367050ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/metastore/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License.
*/ package org.apache.bookkeeper.metastore; import java.util.NavigableMap; import java.util.Set; import java.util.TreeMap; import java.util.concurrent.Executors; import java.util.concurrent.ScheduledExecutorService; import org.apache.bookkeeper.metastore.MSException.Code; import org.apache.bookkeeper.versioning.Version; import org.apache.bookkeeper.versioning.Versioned; public class InMemoryMetastoreTable implements MetastoreScannableTable { public static class MetadataVersion implements Version { int version; public MetadataVersion(int v) { this.version = v; } public MetadataVersion(MetadataVersion v) { this.version = v.version; } public synchronized MetadataVersion incrementVersion() { ++version; return this; } @Override public Occurred compare(Version v) { if (null == v) { throw new NullPointerException("Version is not allowed to be null."); } if (v == Version.NEW) { return Occurred.AFTER; } else if (v == Version.ANY) { return Occurred.CONCURRENTLY; } else if (!(v instanceof MetadataVersion)) { throw new IllegalArgumentException("Invalid version type"); } MetadataVersion mv = (MetadataVersion)v; int res = version - mv.version; if (res == 0) { return Occurred.CONCURRENTLY; } else if (res < 0) { return Occurred.BEFORE; } else { return Occurred.AFTER; } } @Override public boolean equals(Object obj) { if (null == obj || !(obj instanceof MetadataVersion)) { return false; } MetadataVersion v = (MetadataVersion)obj; return 0 == (version - v.version); } @Override public String toString() { return "version=" + version; } @Override public int hashCode() { return version; } } private String name; private TreeMap<String, Versioned<Value>> map = null; private TreeMap watcherMap = null; private ScheduledExecutorService scheduler; public InMemoryMetastoreTable(InMemoryMetaStore metastore, String name) { this.map = new TreeMap<String, Versioned<Value>>(); this.watcherMap = new TreeMap(); this.name = name; this.scheduler = Executors.newSingleThreadScheduledExecutor(); } @Override public String getName() { return this.name; } static Versioned cloneValue(Value value, Version version, Set fields) { if (null != value) { Value newValue = new Value(); if (ALL_FIELDS == fields) { fields = value.getFields(); } for (String f : fields) { newValue.setField(f, value.getField(f)); } value = newValue; } if (null == version) { throw new NullPointerException("Version isn't allowed to be null."); } if (Version.ANY != version && Version.NEW != version) { if (version instanceof MetadataVersion) { version = new MetadataVersion(((MetadataVersion)version).version); } else { throw new IllegalStateException("Wrong version type."); } } return new Versioned(value, version); } @Override public void get(final String key, final MetastoreCallback<Versioned<Value>> cb, final Object ctx) { scheduler.submit(new Runnable() { @Override public void run() { scheduleGet(key, ALL_FIELDS, cb, ctx); } }); } @Override public void get(final String key, final MetastoreWatcher watcher, final MetastoreCallback<Versioned<Value>> cb, final Object ctx) { scheduler.submit(new Runnable() { @Override public void run() { scheduleGet(key, ALL_FIELDS, cb, ctx); synchronized(watcherMap) { watcherMap.put( key, watcher ); } } }); } @Override public void get(final String key, final Set fields, final MetastoreCallback<Versioned<Value>> cb, final Object ctx) { scheduler.submit(new Runnable() { @Override public void run() { scheduleGet(key, fields, cb, ctx); } }); } public synchronized void scheduleGet(String key, Set fields, MetastoreCallback<Versioned<Value>> cb, Object ctx) { if (null == key) { cb.complete(Code.IllegalOp.getCode(), null, ctx); return; } Versioned vv =
get(key); int rc = null == vv ? Code.NoKey.getCode() : Code.OK.getCode(); if (vv != null) { vv = cloneValue(vv.getValue(), vv.getVersion(), fields); } cb.complete(rc, vv, ctx); } @Override public void put(final String key, final Value value, final Version version, final MetastoreCallback cb, final Object ctx) { scheduler.submit(new Runnable() { @Override public void run() { if (null == key || null == value || null == version) { cb.complete(Code.IllegalOp.getCode(), null, ctx); return; } Result result = put(key, value, version); cb.complete(result.code.getCode(), result.value, ctx); /* * If there is a watcher set for this key, we need * to trigger it. */ if(result.code == MSException.Code.OK){ triggerWatch(key, MSWatchedEvent.EventType.CHANGED); } } }); } @Override public void remove(final String key, final Version version, final MetastoreCallback cb, final Object ctx) { scheduler.submit(new Runnable() { @Override public void run() { if (null == key || null == version) { cb.complete(Code.IllegalOp.getCode(), null, ctx); return; } Code code = remove(key, version); cb.complete(code.getCode(), null, ctx); if(code == MSException.Code.OK){ triggerWatch(key, MSWatchedEvent.EventType.REMOVED); } } }); } @Override public void openCursor(MetastoreCallback cb, Object ctx) { openCursor(EMPTY_START_KEY, true, EMPTY_END_KEY, true, Order.ASC, ALL_FIELDS, cb, ctx); } @Override public void openCursor(Set fields, MetastoreCallback cb, Object ctx) { openCursor(EMPTY_START_KEY, true, EMPTY_END_KEY, true, Order.ASC, fields, cb, ctx); } @Override public void openCursor(String firstKey, boolean firstInclusive, String lastKey, boolean lastInclusive, Order order, MetastoreCallback cb, Object ctx) { openCursor(firstKey, firstInclusive, lastKey, lastInclusive, order, ALL_FIELDS, cb, ctx); } @Override public void openCursor(final String firstKey, final boolean firstInclusive, final String lastKey, final boolean lastInclusive, final Order order, final Set fields, final MetastoreCallback cb, final Object ctx) { scheduler.submit(new Runnable() { @Override public void run() { Result result = openCursor(firstKey, firstInclusive, lastKey, lastInclusive, order, fields); cb.complete(result.code.getCode(), result.value, ctx); } }); } private void triggerWatch(String key, MSWatchedEvent.EventType type) { synchronized(watcherMap){ if(watcherMap.containsKey( key )) { MSWatchedEvent event = new MSWatchedEvent(key, type); watcherMap.get( key ).process( event ); watcherMap.remove( key ); } } } private synchronized Versioned get(String key) { return map.get(key); } private synchronized Code remove(String key, Version version) { Versioned vv = map.get(key); if (null == vv) { return Code.NoKey; } if (Version.Occurred.CONCURRENTLY != vv.getVersion().compare(version)) { return Code.BadVersion; } map.remove(key); return Code.OK; } static class Result { Code code; T value; public Result(Code code, T value) { this.code = code; this.value = value; } } private synchronized Result put(String key, Value value, Version version) { Versioned vv = map.get(key); if (vv == null) { if (Version.NEW != version) { return new Result(Code.NoKey, null); } vv = cloneValue(value, version, ALL_FIELDS); vv.setVersion(new MetadataVersion(0)); map.put(key, vv); return new Result(Code.OK, new MetadataVersion(0)); } if (Version.NEW == version) { return new Result(Code.KeyExists, null); } if (Version.Occurred.CONCURRENTLY != vv.getVersion().compare(version)) { return new Result(Code.BadVersion, null); } 
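Spelled out, the conditional put() above behaves as a small state machine: Version.NEW is strictly create-only, updates must present the exact current version, and each successful update bumps the version by one. A runnable walk-through against the in-memory table (the blocking queue is just a way to wait for each callback; the field name and bytes are arbitrary, and generics are used as in the released API):

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import org.apache.bookkeeper.metastore.InMemoryMetaStore;
    import org.apache.bookkeeper.metastore.InMemoryMetastoreTable;
    import org.apache.bookkeeper.metastore.InMemoryMetastoreTable.MetadataVersion;
    import org.apache.bookkeeper.metastore.MetastoreCallback;
    import org.apache.bookkeeper.metastore.Value;
    import org.apache.bookkeeper.versioning.Version;

    class PutSemanticsDemo {
        public static void main(String[] args) throws Exception {
            InMemoryMetastoreTable t =
                    new InMemoryMetastoreTable(new InMemoryMetaStore(), "demo");
            final BlockingQueue<Integer> rcs = new ArrayBlockingQueue<Integer>(1);
            MetastoreCallback<Version> cb = new MetastoreCallback<Version>() {
                @Override
                public void complete(int rc, Version v, Object ctx) {
                    rcs.add(rc);
                }
            };
            Value v = new Value().setField("f", new byte[] { 1 });
            t.put("k", v, Version.NEW, cb, null);
            System.out.println(rcs.take()); //  0: OK, version is now 0
            t.put("k", v, Version.NEW, cb, null);
            System.out.println(rcs.take()); // -3: KeyExists, NEW is create-only
            t.put("k", v, new MetadataVersion(0), cb, null);
            System.out.println(rcs.take()); //  0: OK, version bumped to 1
            t.put("k", v, new MetadataVersion(0), cb, null);
            System.out.println(rcs.take()); // -1: BadVersion, stale version
            System.exit(0); // the table's internal scheduler is non-daemon
        }
    }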
vv.setVersion(((MetadataVersion)vv.getVersion()).incrementVersion()); vv.setValue(vv.getValue().merge(value)); return new Result(Code.OK, new MetadataVersion((MetadataVersion)vv.getVersion())); } private synchronized Result openCursor( String firstKey, boolean firstInclusive, String lastKey, boolean lastInclusive, Order order, Set fields) { if (0 == map.size()) { return new Result(Code.OK, MetastoreCursor.EMPTY_CURSOR); } boolean isLegalCursor = false; NavigableMap<String, Versioned<Value>> myMap = null; if (Order.ASC == order) { myMap = map; if (EMPTY_END_KEY == lastKey || lastKey.compareTo(myMap.lastKey()) > 0) { lastKey = myMap.lastKey(); lastInclusive = true; } if (EMPTY_START_KEY == firstKey || firstKey.compareTo(myMap.firstKey()) < 0) { firstKey = myMap.firstKey(); firstInclusive = true; } if (firstKey.compareTo(lastKey) <= 0) { isLegalCursor = true; } } else if (Order.DESC == order) { myMap = map.descendingMap(); if (EMPTY_START_KEY == lastKey || lastKey.compareTo(myMap.lastKey()) < 0) { lastKey = myMap.lastKey(); lastInclusive = true; } if (EMPTY_END_KEY == firstKey || firstKey.compareTo(myMap.firstKey()) > 0) { firstKey = myMap.firstKey(); firstInclusive = true; } if (firstKey.compareTo(lastKey) >= 0) { isLegalCursor = true; } } if (!isLegalCursor || null == myMap) { return new Result(Code.IllegalOp, null); } MetastoreCursor cursor = new InMemoryMetastoreCursor( myMap.subMap(firstKey, firstInclusive, lastKey, lastInclusive), fields, scheduler); return new Result(Code.OK, cursor); } @Override public void close() { // do nothing } } MSException.java000066400000000000000000000151221244507361200345030ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/metastore/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.bookkeeper.metastore; import java.util.EnumSet; import java.util.HashMap; import java.util.Map; @SuppressWarnings("serial") public abstract class MSException extends Exception { /** * return codes */ public static enum Code { OK (0, "OK"), BadVersion (-1, "Version conflict"), NoKey (-2, "Key does not exist"), KeyExists (-3, "Key exists"), NoEntries (-4, "No entries found"), InterruptedException (-100, "Operation interrupted"), IllegalOp (-101, "Illegal operation"), ServiceDown (-102, "Metadata service is down"), OperationFailure(-103, "Operation failed on metadata storage server side"); private static final Map codes = new HashMap(); static { for (Code c : EnumSet.allOf(Code.class)) { codes.put(c.code, c); } } private final int code; private final String description; private Code(int code, String description) { this.code = code; this.description = description; } /** * Get the int value for a particular Code.
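The Code enum's static block builds a one-time int-to-enum index so that get(int) avoids a linear scan over values(); the idiom generalizes to any wire-protocol-style error code. In isolation:

    import java.util.EnumSet;
    import java.util.HashMap;
    import java.util.Map;

    enum StatusCode {
        OK(0), BAD_VERSION(-1), NO_KEY(-2);

        private static final Map<Integer, StatusCode> INDEX =
                new HashMap<Integer, StatusCode>();
        static {
            // built once, at class-initialization time
            for (StatusCode c : EnumSet.allOf(StatusCode.class)) {
                INDEX.put(c.code, c);
            }
        }

        private final int code;

        StatusCode(int code) {
            this.code = code;
        }

        int getCode() {
            return code;
        }

        // returns null for an unrecognized code, mirroring Code.get(int)
        static StatusCode get(int code) {
            return INDEX.get(code);
        }
    }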
* * @return error code as integer */ public int getCode() { return code; } /** * Get the description for a particular Code. * * @return error description */ public String getDescription() { return description; } /** * Get the Code value for a particular integer error code. * * @param code int error code * @return Code value corresponding to specified int code, or null. */ public static Code get(int code) { return codes.get(code); } } private final Code code; MSException(Code code, String errMsg) { super(code.getDescription() + " : " + errMsg); this.code = code; } MSException(Code code, String errMsg, Throwable cause) { super(code.getDescription() + " : " + errMsg, cause); this.code = code; } public Code getCode() { return this.code; } public static MSException create(Code code) { return create(code, "", null); } public static MSException create(Code code, String errMsg) { return create(code, errMsg, null); } public static MSException create(Code code, String errMsg, Throwable cause) { switch (code) { case BadVersion: return new BadVersionException(errMsg, cause); case NoKey: return new NoKeyException(errMsg, cause); case KeyExists: return new KeyExistsException(errMsg, cause); case InterruptedException: return new MSInterruptedException(errMsg, cause); case IllegalOp: return new IllegalOpException(errMsg, cause); case ServiceDown: return new ServiceDownException(errMsg, cause); case OperationFailure: return new OperationFailureException(errMsg, cause); case OK: default: throw new IllegalArgumentException("Invalid exception code"); } } public static class BadVersionException extends MSException { public BadVersionException(String errMsg) { super(Code.BadVersion, errMsg); } public BadVersionException(String errMsg, Throwable cause) { super(Code.BadVersion, errMsg, cause); } } public static class NoKeyException extends MSException { public NoKeyException(String errMsg) { super(Code.NoKey, errMsg); } public NoKeyException(String errMsg, Throwable cause) { super(Code.NoKey, errMsg, cause); } } // Exception would be thrown in a cursor if no entries found public static class NoEntriesException extends MSException { public NoEntriesException(String errMsg) { super(Code.NoEntries, errMsg); } public NoEntriesException(String errMsg, Throwable cause) { super(Code.NoEntries, errMsg, cause); } } public static class KeyExistsException extends MSException { public KeyExistsException(String errMsg) { super(Code.KeyExists, errMsg); } public KeyExistsException(String errMsg, Throwable cause) { super(Code.KeyExists, errMsg, cause); } } public static class MSInterruptedException extends MSException { public MSInterruptedException(String errMsg) { super(Code.InterruptedException, errMsg); } public MSInterruptedException(String errMsg, Throwable cause) { super(Code.InterruptedException, errMsg, cause); } } public static class IllegalOpException extends MSException { public IllegalOpException(String errMsg) { super(Code.IllegalOp, errMsg); } public IllegalOpException(String errMsg, Throwable cause) { super(Code.IllegalOp, errMsg, cause); } } public static class ServiceDownException extends MSException { public ServiceDownException(String errMsg) { super(Code.ServiceDown, errMsg); } public ServiceDownException(String errMsg, Throwable cause) { super(Code.ServiceDown, errMsg, cause); } } public static class OperationFailureException extends MSException { public OperationFailureException(String errMsg) { super(Code.OperationFailure, errMsg); } public OperationFailureException(String errMsg, Throwable cause) { 
super(Code.OperationFailure, errMsg, cause); } } } MSWatchedEvent.java000066400000000000000000000022711244507361200351270ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/metastore/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.bookkeeper.metastore; public class MSWatchedEvent { public enum EventType {CHANGED, REMOVED}; String key; EventType type; public MSWatchedEvent(String key, EventType type) { this.key = key; this.type = type; } public EventType getType() { return type; } public String getKey(){ return key; } } MetaStore.java000066400000000000000000000043451244507361200342150ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/metastore/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.bookkeeper.metastore; import org.apache.commons.configuration.Configuration; /** * Metadata Store Interface. */ public interface MetaStore { /** * Return the name of the plugin. * * @return the plugin name. */ public String getName(); /** * Get the plugin version. * * @return the plugin version. */ public int getVersion(); /** * Initialize the meta store. * * @param config * Configuration object passed to metastore * @param msVersion * Version to initialize the metastore * @throws MetastoreException when failed to initialize */ public void init(Configuration config, int msVersion) throws MetastoreException; /** * Close the meta store. */ public void close(); /** * Create a metastore table. * * @param name * Table name. * @return a metastore table * @throws MetastoreException when failed to create the metastore table. */ public MetastoreTable createTable(String name) throws MetastoreException; /** * Create a scannable metastore table. * * @param name * Table name. * @return a metastore scannable table * @throws MetastoreException when failed to create the metastore table.
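Tying the MetaStore interface together, a typical plugin lifecycle is: instantiate, init() with a configuration and the version the caller was compiled against, create tables, and close() when done. A sketch using the in-memory implementation above (BaseConfiguration and the table name are illustrative, and init() is a no-op for this implementation):

    import org.apache.commons.configuration.BaseConfiguration;
    import org.apache.bookkeeper.metastore.InMemoryMetaStore;
    import org.apache.bookkeeper.metastore.MetaStore;
    import org.apache.bookkeeper.metastore.MetastoreScannableTable;

    class MetaStoreLifecycleSketch {
        public static void main(String[] args) throws Exception {
            MetaStore ms = new InMemoryMetaStore();
            // pass the expected version so the plugin can refuse
            // incompatible deployments
            ms.init(new BaseConfiguration(), ms.getVersion());
            MetastoreScannableTable ledgers = ms.createScannableTable("ledgers");
            try {
                // ... put/get/scan against the table ...
            } finally {
                ledgers.close();
                ms.close();
            }
        }
    }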
*/ public MetastoreScannableTable createScannableTable(String name) throws MetastoreException; } MetastoreCallback.java000066400000000000000000000017371244507361200356740ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/metastore/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.bookkeeper.metastore; public interface MetastoreCallback<T> { /** * @see MSException.Code */ public void complete(int rc, T value, Object ctx); } MetastoreCursor.java000066400000000000000000000054511244507361200354520ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/metastore/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.bookkeeper.metastore; import java.io.Closeable; import java.io.IOException; import java.util.Iterator; public interface MetastoreCursor extends Closeable { public static MetastoreCursor EMPTY_CURSOR = new MetastoreCursor() { @Override public boolean hasMoreEntries() { return false; } @Override public Iterator readEntries(int numEntries) throws MSException { throw new MSException.NoEntriesException("No entries left in the cursor."); } @Override public void asyncReadEntries(int numEntries, ReadEntriesCallback callback, Object ctx) { callback.complete(MSException.Code.NoEntries.getCode(), null, ctx); } @Override public void close() throws IOException { // do nothing } }; public static interface ReadEntriesCallback extends MetastoreCallback<Iterator<MetastoreTableItem>> { } /** * Are there any entries left in the cursor to read. * * @return true if there are entries left, false otherwise. */ public boolean hasMoreEntries(); /** * Read entries from the cursor, up to the specified numEntries. * The returned list can be smaller. * * @param numEntries * maximum number of entries to read * @return the iterator of returned entries. * @throws MSException when failed to read entries from the cursor. */ public Iterator readEntries(int numEntries) throws MSException; /** * Asynchronously read entries from the cursor, up to the specified numEntries.
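A MetastoreCursor is consumed in batches: hasMoreEntries() gates the loop, each readEntries(n) call may return fewer than n items, and the cursor must be closed afterwards. The canonical synchronous drain (the batch size and consumer are illustrative):

    import java.util.Iterator;
    import org.apache.bookkeeper.metastore.MetastoreCursor;
    import org.apache.bookkeeper.metastore.MetastoreTableItem;

    class CursorDrainSketch {
        static void drain(MetastoreCursor cursor) throws Exception {
            try {
                while (cursor.hasMoreEntries()) {
                    // may hand back fewer than 100 entries per call
                    Iterator<MetastoreTableItem> batch = cursor.readEntries(100);
                    while (batch.hasNext()) {
                        MetastoreTableItem item = batch.next();
                        System.out.println(item.getKey()); // stand-in consumer
                    }
                }
            } finally {
                cursor.close();
            }
        }
    }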
* * @see #readEntries(int) * @param numEntries * maximum number of entries to read * @param callback * callback object * @param ctx * opaque context */ public void asyncReadEntries(int numEntries, ReadEntriesCallback callback, Object ctx); } MetastoreException.java000066400000000000000000000022271244507361200361310ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/metastore/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.bookkeeper.metastore; @SuppressWarnings("serial") public class MetastoreException extends Exception { public MetastoreException(String message) { super(message); } public MetastoreException(String message, Throwable t) { super(message, t); } public MetastoreException(Throwable t) { super(t); } } MetastoreFactory.java000066400000000000000000000023361244507361200356030ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/metastore/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.bookkeeper.metastore; import org.apache.bookkeeper.util.ReflectionUtils; public class MetastoreFactory { public static MetaStore createMetaStore(String name) throws MetastoreException { try { return ReflectionUtils.newInstance(name, MetaStore.class); } catch (Throwable t) { throw new MetastoreException("Failed to instantiate metastore : " + name, t); } } } MetastoreScannableTable.java000066400000000000000000000077511244507361200370340ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/metastore/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License.
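Taken together, the interfaces above form a small plugin API: MetastoreFactory instantiates an implementation by class name, init() binds it to a configuration and a metadata version, and tables are then created off the store. The following is a minimal driver sketch, not shipped code: the plugin class name "com.example.InMemoryMetaStore" is hypothetical, and passing the plugin's own getVersion() as msVersion is just one plausible choice.

import org.apache.bookkeeper.metastore.MetaStore;
import org.apache.bookkeeper.metastore.MetastoreException;
import org.apache.bookkeeper.metastore.MetastoreFactory;
import org.apache.bookkeeper.metastore.MetastoreTable;
import org.apache.commons.configuration.CompositeConfiguration;

public class MetaStoreUsageSketch {
    public static void main(String[] args) throws MetastoreException {
        // Instantiated reflectively, exactly as MetastoreFactory does internally.
        MetaStore metastore = MetastoreFactory.createMetaStore(
                "com.example.InMemoryMetaStore"); // hypothetical plugin class
        metastore.init(new CompositeConfiguration(), metastore.getVersion());
        try {
            MetastoreTable table = metastore.createTable("ledgers");
            // ... issue asynchronous get/put/remove calls against the table ...
            table.close();
        } finally {
            metastore.close();
        }
    }
}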
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.bookkeeper.metastore; import java.util.Set; public interface MetastoreScannableTable extends MetastoreTable { // Used by cursor, etc when they want to start at the beginning of a table public static final String EMPTY_START_KEY = null; // Last row in a table. public static final String EMPTY_END_KEY = null; // the order to loop over a table public static enum Order { ASC, DESC } /** * Open a cursor to loop over the entries belonging to a key range, * which returns all fields for each entry. * *

 * <p>
 * Return Code:<br/>
 * {@link MSException.Code.OK}: an opened cursor<br/>
 * {@link MSException.Code.IllegalOp}/{@link MSException.Code.ServiceDown}: other issues
 * <p>
 *
 * @param firstKey
 *          Key to start scanning. If it is {@link EMPTY_START_KEY}, it starts from first key (inclusive).
 * @param firstInclusive
 *          true if firstKey is to be included in the returned view.
 * @param lastKey
 *          Key to stop scanning. If it is {@link EMPTY_END_KEY}, scan ends at the lastKey of the table (inclusive).
 * @param lastInclusive
 *          true if lastKey is to be included in the returned view.
 * @param order
 *          the order to loop over the entries
 * @param cb
 *          Callback to return an opened cursor.
 * @param ctx
 *          Callback context
 */
public void openCursor(String firstKey, boolean firstInclusive, String lastKey, boolean lastInclusive, Order order, MetastoreCallback<MetastoreCursor> cb, Object ctx);

/**
 * Open a cursor to loop over the entries belonging to a key range,
 * which returns the specified fields for each entry.
 * <p>
 * Return Code:<br/>
 * {@link MSException.Code.OK}: an opened cursor<br/>
 * {@link MSException.Code.IllegalOp}/{@link MSException.Code.ServiceDown}: other issues
 * <p>
 *
 * @param firstKey
 *          Key to start scanning. If it is {@link EMPTY_START_KEY}, it starts from first key (inclusive).
 * @param firstInclusive
 *          true if firstKey is to be included in the returned view.
 * @param lastKey
 *          Key to stop scanning. If it is {@link EMPTY_END_KEY}, scan ends at the lastKey of the table (inclusive).
 * @param lastInclusive
 *          true if lastKey is to be included in the returned view.
 * @param order
 *          the order to loop over the entries
 * @param fields
 *          Fields to select
 * @param cb
 *          Callback to return an opened cursor.
 * @param ctx
 *          Callback context
 */
public void openCursor(String firstKey, boolean firstInclusive, String lastKey, boolean lastInclusive, Order order, Set<String> fields, MetastoreCallback<MetastoreCursor> cb, Object ctx);
}
MetastoreTable.java000066400000000000000000000155301244507361200352230ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/metastore/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.bookkeeper.metastore; import java.util.HashSet; import java.util.Set; import org.apache.bookkeeper.versioning.Version; import org.apache.bookkeeper.versioning.Versioned; public interface MetastoreTable { // select all fields when reading or scanning entries public static final Set<String> ALL_FIELDS = null; // select no fields to return when reading/scanning entries public static final Set<String> NON_FIELDS = new HashSet<String>(); /** * Get table name. * * @return table name */ public String getName();
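To make the range-scan contract concrete before moving on to MetastoreTable, here is a hedged usage sketch that drains all keys in ["a", "m") in ascending order; the key bounds and the batch size of 16 are illustrative, and error handling is deliberately minimal.

import java.util.Iterator;
import java.util.concurrent.CountDownLatch;

import org.apache.bookkeeper.metastore.MSException;
import org.apache.bookkeeper.metastore.MetastoreCallback;
import org.apache.bookkeeper.metastore.MetastoreCursor;
import org.apache.bookkeeper.metastore.MetastoreScannableTable;
import org.apache.bookkeeper.metastore.MetastoreScannableTable.Order;
import org.apache.bookkeeper.metastore.MetastoreTableItem;

public class RangeScanSketch {
    public static void scan(MetastoreScannableTable table) throws InterruptedException {
        final CountDownLatch done = new CountDownLatch(1);
        table.openCursor("a", true, "m", false, Order.ASC,
                new MetastoreCallback<MetastoreCursor>() {
                    @Override
                    public void complete(int rc, MetastoreCursor cursor, Object ctx) {
                        try {
                            if (MSException.Code.OK.getCode() == rc) {
                                while (cursor.hasMoreEntries()) {
                                    // read up to 16 entries per batch
                                    Iterator<MetastoreTableItem> items = cursor.readEntries(16);
                                    while (items.hasNext()) {
                                        System.out.println(items.next().getKey());
                                    }
                                }
                                cursor.close();
                            }
                        } catch (Exception e) {
                            e.printStackTrace(); // sketch-level error handling
                        } finally {
                            done.countDown();
                        }
                    }
                }, null);
        done.await();
    }
}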

/**
 * Get all fields of a key.
 * <p>
 * Return Code:<br/>
 * <ul>
 * <li>{@link MSException.Code.OK}: success returning the key</li>
 * <li>{@link MSException.Code.NoKey}: no key found</li>
 * <li>{@link MSException.Code.IllegalOp}/{@link MSException.Code.ServiceDown}: other issues</li>
 * </ul>
 *
 * @param key
 *          Key Name
 * @param cb
 *          Callback to return all fields of the key
 * @param ctx
 *          Callback context
 */
public void get(String key, MetastoreCallback<Versioned<Value>> cb, Object ctx);
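Every operation reports through a MetastoreCallback, so a caller that wants blocking semantics has to bridge the callback to a latch, roughly as in the sketch below (MetastoreUtils, later in this package, ships a generic SyncMetastoreCallback that does the same thing).

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicReference;

import org.apache.bookkeeper.metastore.MSException;
import org.apache.bookkeeper.metastore.MetastoreCallback;
import org.apache.bookkeeper.metastore.MetastoreTable;
import org.apache.bookkeeper.metastore.Value;
import org.apache.bookkeeper.versioning.Versioned;

public class SyncGetSketch {
    // Returns the versioned value, or null if the key was missing or the call failed.
    public static Versioned<Value> getBlocking(MetastoreTable table, String key)
            throws InterruptedException {
        final CountDownLatch done = new CountDownLatch(1);
        final AtomicReference<Versioned<Value>> result =
                new AtomicReference<Versioned<Value>>();
        table.get(key, new MetastoreCallback<Versioned<Value>>() {
            @Override
            public void complete(int rc, Versioned<Value> value, Object ctx) {
                if (MSException.Code.OK.getCode() == rc) {
                    result.set(value);
                }
                done.countDown();
            }
        }, null);
        done.await();
        return result.get();
    }
}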

/**
 * Get all fields of a key.
 * <p>
 * Return Code:<br/>
 * <ul>
 * <li>{@link MSException.Code.OK}: success returning the key</li>
 * <li>{@link MSException.Code.NoKey}: no key found</li>
 * <li>{@link MSException.Code.IllegalOp}/{@link MSException.Code.ServiceDown}: other issues</li>
 * </ul>
 *
 * @param key
 *          Key Name
 * @param watcher
 *          Watcher object to receive notifications
 * @param cb
 *          Callback to return all fields of the key
 * @param ctx
 *          Callback context
 */
public void get(String key, MetastoreWatcher watcher, MetastoreCallback<Versioned<Value>> cb, Object ctx);

/**
 * Get specified fields of a key.
 * <p>
 * Return Code:<br/>
 * <ul>
 * <li>{@link MSException.Code.OK}: success returning the key</li>
 * <li>{@link MSException.Code.NoKey}: no key found</li>
 * <li>{@link MSException.Code.IllegalOp}/{@link MSException.Code.ServiceDown}: other issues</li>
 * </ul>
 *
 * @param key
 *          Key Name
 * @param fields
 *          Fields to return
 * @param cb
 *          Callback to return specified fields of the key
 * @param ctx
 *          Callback context
 */
public void get(String key, Set<String> fields, MetastoreCallback<Versioned<Value>> cb, Object ctx);
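A short projection sketch for the variant above; the "owner" field name is purely illustrative. ALL_FIELDS (null) would return every field, while NON_FIELDS (an empty set) is useful for bare existence checks.

import java.util.HashSet;
import java.util.Set;

import org.apache.bookkeeper.metastore.MetastoreCallback;
import org.apache.bookkeeper.metastore.MetastoreTable;
import org.apache.bookkeeper.metastore.Value;
import org.apache.bookkeeper.versioning.Versioned;

public class ProjectionSketch {
    public static void getOwnerField(MetastoreTable table, String key,
                                     MetastoreCallback<Versioned<Value>> cb) {
        Set<String> fields = new HashSet<String>();
        fields.add("owner"); // illustrative field name
        table.get(key, fields, cb, null);
    }
}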

/**
 * Update a key according to its version.
 * <p>
 * Return Code:<br/>
 * <ul>
 * <li>{@link MSException.Code.OK}: success updating the key</li>
 * <li>{@link MSException.Code.BadVersion}: failed to update the key due to bad version</li>
 * <li>{@link MSException.Code.NoKey}: no key found to update, when the provided version is not {@link Version.NEW}</li>
 * <li>{@link MSException.Code.KeyExists}: the entry already exists when {@link Version.NEW} is provided</li>
 * <li>{@link MSException.Code.IllegalOp}/{@link MSException.Code.ServiceDown}: other issues</li>
 * </ul>
 * <p>
 * The key is updated only when the version matches its current version.
 * In particular, if the provided version is:<br/>
 * <ul>
 * <li>{@link Version.ANY}: update the data without comparing its version.
 * Note this usage is not encouraged since it may mess up data consistency.</li>
 * <li>{@link Version.NEW}: create the entry if it doesn't exist before;
 * otherwise return {@link MSException.Code.KeyExists}.</li>
 * </ul>
 *
 * @param key
 *          Key Name
 * @param value
 *          Value to update.
 * @param version
 *          Version specified to update.
 * @param cb
 *          Callback to return the new version after the update.
 * @param ctx
 *          Callback context
 */
public void put(String key, Value value, Version version, MetastoreCallback<Version> cb, Object ctx);
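The version argument turns put() into a compare-and-set. Below is a sketch of create-if-absent using Version.NEW, where a KeyExists return code reports a lost race; the key and the "owner" field name are illustrative.

import java.util.concurrent.CountDownLatch;

import org.apache.bookkeeper.metastore.MSException;
import org.apache.bookkeeper.metastore.MetastoreCallback;
import org.apache.bookkeeper.metastore.MetastoreTable;
import org.apache.bookkeeper.metastore.Value;
import org.apache.bookkeeper.versioning.Version;

public class CreateIfAbsentSketch {
    public static void createIfAbsent(MetastoreTable table, final String key, byte[] owner)
            throws InterruptedException {
        final CountDownLatch done = new CountDownLatch(1);
        Value value = new Value().setField("owner", owner); // illustrative field
        table.put(key, value, Version.NEW, new MetastoreCallback<Version>() {
            @Override
            public void complete(int rc, Version newVersion, Object ctx) {
                if (MSException.Code.OK.getCode() == rc) {
                    System.out.println("created " + key);
                } else if (MSException.Code.KeyExists.getCode() == rc) {
                    System.out.println(key + " already exists");
                }
                done.countDown();
            }
        }, null);
        done.await();
    }
}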

/**
 * Remove a key by its version.
 * <p>
 * The key is removed only when the version matches its current version.
 * If version is {@link Version.ANY}, the key is removed directly.
 * <p>
 * Return Code:<br/>
 * <ul>
 * <li>{@link MSException.Code.OK}: success removing the key</li>
 * <li>{@link MSException.Code.NoKey}: if the key doesn't exist.</li>
 * <li>{@link MSException.Code.BadVersion}: failed to delete the key due to bad version</li>
 * <li>{@link MSException.Code.IllegalOp}/{@link MSException.Code.ServiceDown}: other issues</li>
 * </ul>
 *
 * @param key
 *          Key Name.
 * @param version
 *          Version specified to remove.
 * @param cb
 *          Callback to return the result of the removal
 * @param ctx
 *          Callback context
 */
public void remove(String key, Version version, MetastoreCallback<Void> cb, Object ctx);

/**
 * Open a cursor to loop over all the entries of the table, which returns all fields for each entry. The returned cursor doesn't need to guarantee any order, since the underlying store might be a hash table or an ordered table.
 *
 * @param cb
 *          Callback to return an opened cursor
 * @param ctx
 *          Callback context
 */
public void openCursor(MetastoreCallback<MetastoreCursor> cb, Object ctx);

/**
 * Open a cursor to loop over all the entries of the table, which returns the specified fields for each entry. The returned cursor doesn't need to guarantee any order, since the underlying store might be a hash table or an ordered table.
 *
 * @param fields
 *          Fields to select
 * @param cb
 *          Callback to return an opened cursor
 * @param ctx
 *          Callback context
 */
public void openCursor(Set<String> fields, MetastoreCallback<MetastoreCursor> cb, Object ctx);

/**
 * Close the table.
 */
public void close();
}
MetastoreTableItem.java000066400000000000000000000034211244507361200360360ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/metastore/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.bookkeeper.metastore; import org.apache.bookkeeper.versioning.Versioned; /** * Identify an item in a metastore table. */ public class MetastoreTableItem { private String key; private Versioned<Value> value; public MetastoreTableItem(String key, Versioned<Value> value) { this.key = key; this.value = value; } /** * Get the key of the table item. * * @return key of table item. */ public String getKey() { return key; } /** * Set the key of the item. * * @param key Key */ public void setKey(String key) { this.key = key; } /** * Get the value of the item. * * @return value of the item. */ public Versioned<Value> getValue() { return value; } /** * Set the value of the item. * * @param value Value of the item. */ public void setValue(Versioned<Value> value) { this.value = value; } } MetastoreUtils.java000066400000000000000000000115041244507361200352710ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/metastore/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License.
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.bookkeeper.metastore; import java.util.ArrayList; import java.util.Iterator; import java.util.List; import java.util.concurrent.CountDownLatch; import java.util.concurrent.atomic.AtomicInteger; import org.apache.bookkeeper.metastore.MSException.Code; import org.apache.bookkeeper.versioning.Version; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Provides utilities for metastore. */ public class MetastoreUtils { protected final static Logger logger = LoggerFactory.getLogger(MetastoreUtils.class); static class MultiMetastoreCallback<T> implements MetastoreCallback<T> { int rc = Code.OK.getCode(); final int numOps; final AtomicInteger numFinished = new AtomicInteger(0); final CountDownLatch doneLatch = new CountDownLatch(1); MultiMetastoreCallback(int numOps) { this.numOps = numOps; } @Override public void complete(int rc, T value, Object ctx) { if (Code.OK.getCode() != rc) { this.rc = rc; doneLatch.countDown(); return; } if (numFinished.incrementAndGet() == numOps) { doneLatch.countDown(); } } public void waitUntilAllFinished() throws MSException, InterruptedException { doneLatch.await(); if (Code.OK.getCode() != rc) { throw MSException.create(Code.get(rc)); } } } static class SyncMetastoreCallback<T> implements MetastoreCallback<T> { int rc; T result; final CountDownLatch doneLatch = new CountDownLatch(1); @Override public void complete(int rc, T value, Object ctx) { this.rc = rc; result = value; doneLatch.countDown(); } public T getResult() throws MSException, InterruptedException { doneLatch.await(); if (Code.OK.getCode() != rc) { throw MSException.create(Code.get(rc)); } return result; } } /** * Clean the given table. * * @param table * Metastore Table. * @param numEntriesPerScan * Num entries per scan.
* @throws MSException when failed to clean the table * @throws InterruptedException when interrupted while cleaning the table */ public static void cleanTable(MetastoreTable table, int numEntriesPerScan) throws MSException, InterruptedException { // open cursor SyncMetastoreCallback<MetastoreCursor> openCb = new SyncMetastoreCallback<MetastoreCursor>(); table.openCursor(MetastoreTable.NON_FIELDS, openCb, null); MetastoreCursor cursor = openCb.getResult(); logger.info("Open cursor for table {} to clean entries.", table.getName()); List<String> keysToClean = new ArrayList<String>(numEntriesPerScan); int numEntriesRemoved = 0; while (cursor.hasMoreEntries()) { logger.info("Fetching next {} entries from table {} to clean.", numEntriesPerScan, table.getName()); Iterator<MetastoreTableItem> iter = cursor.readEntries(numEntriesPerScan); keysToClean.clear(); while (iter.hasNext()) { MetastoreTableItem item = iter.next(); String key = item.getKey(); keysToClean.add(key); } if (keysToClean.isEmpty()) { continue; } logger.info("Issuing deletes to delete keys {}", keysToClean); // issue deletes to delete batch of keys MultiMetastoreCallback<Void> mcb = new MultiMetastoreCallback<Void>(keysToClean.size()); for (String key : keysToClean) { table.remove(key, Version.ANY, mcb, null); } mcb.waitUntilAllFinished(); numEntriesRemoved += keysToClean.size(); logger.info("Removed {} entries from table {}.", numEntriesRemoved, table.getName()); } logger.info("Finished cleaning up table {}.", table.getName()); } } MetastoreWatcher.java000066400000000000000000000016421244507361200355700ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/metastore/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.bookkeeper.metastore; public interface MetastoreWatcher { public void process(MSWatchedEvent e); }
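With those helpers in place, emptying a table reduces to one call. A usage sketch, assuming table is an already-opened MetastoreTable; the batch size of 128 is arbitrary.

MetastoreUtils.cleanTable(table, 128); // scans and removes up to 128 keys per batch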
bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/metastore/Value.java000066400000000000000000000103401244507361200334350ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.bookkeeper.metastore; import com.google.common.primitives.UnsignedBytes; import com.google.common.hash.Hasher; import com.google.common.hash.HashFunction; import com.google.common.hash.Hashing; import java.util.Comparator; import java.util.HashMap; import java.util.Map; import java.util.Set; import java.util.Collections; import static org.apache.bookkeeper.metastore.MetastoreTable.ALL_FIELDS; public class Value { static final Comparator<byte[]> comparator = UnsignedBytes.lexicographicalComparator(); protected Map<String, byte[]> fields; public Value() { fields = new HashMap<String, byte[]>(); } public Value(Value v) { fields = new HashMap<String, byte[]>(v.fields); } public byte[] getField(String field) { return fields.get(field); } public Value setField(String field, byte[] data) { fields.put(field, data); return this; } public Value clearFields() { fields.clear(); return this; } public Set<String> getFields() { return fields.keySet(); } public Map<String, byte[]> getFieldsMap() { return Collections.unmodifiableMap(fields); } /** * Select parts of fields. * * @param fields * Parts of fields * @return new value with specified fields */ public Value project(Set<String> fields) { if (ALL_FIELDS == fields) { return new Value(this); } Value v = new Value(); for (String f : fields) { byte[] data = this.fields.get(f); v.setField(f, data); } return v; } @Override public int hashCode() { HashFunction hf = Hashing.murmur3_32(); Hasher hc = hf.newHasher(); for (String key : fields.keySet()) { hc.putString(key); } return hc.hash().asInt(); } @Override public boolean equals(Object o) { if (!(o instanceof Value)) { return false; } Value other = (Value) o; if (fields.size() != other.fields.size()) { return false; } for (String f : fields.keySet()) { byte[] v1 = fields.get(f); byte[] v2 = other.fields.get(f); if (0 != comparator.compare(v1, v2)) { return false; } } return true; } /** * Merge other value. * * @param other * Other Value */ public Value merge(Value other) { for (Map.Entry<String, byte[]> entry : other.fields.entrySet()) { if (null == entry.getValue()) { fields.remove(entry.getKey()); } else { fields.put(entry.getKey(), entry.getValue()); } } return this; } @Override public String toString() { StringBuilder sb = new StringBuilder(); sb.append("["); for (Map.Entry<String, byte[]> entry : fields.entrySet()) { String f = entry.getKey(); if (null == f) { f = "NULL"; } String value; if (null == entry.getValue()) { value = "NONE"; } else { value = new String(entry.getValue()); } sb.append("('").append(f).append("'=").append(value).append(")"); } sb.append("]"); return sb.toString(); } }
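One subtlety in merge() above is worth spelling out: a field mapped to null in the argument acts as a deletion marker, not as a stored value. A small sketch:

Value base = new Value()
        .setField("a", "1".getBytes())
        .setField("b", "2".getBytes());
Value delta = new Value()
        .setField("b", null)               // null marks "b" for removal
        .setField("c", "3".getBytes());
base.merge(delta);
// base now holds exactly the fields "a" and "c"; printing it yields
// e.g. [('a'=1)('c'=3)] (field order is unspecified, since fields is a HashMap).
System.out.println(base);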
bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/net/000077500000000000000000000000001244507361200303035ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/net/DNS.java000066400000000000000000000326671244507361200316060ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ // This code has been copied from hadoop-common 2.0.4-alpha package org.apache.bookkeeper.net; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import java.net.InetAddress; import java.net.NetworkInterface; import java.net.SocketException; import java.net.UnknownHostException; import java.util.Collections; import java.util.Enumeration; import java.util.LinkedHashSet; import java.util.Vector; import javax.naming.NamingException; import javax.naming.directory.Attributes; import javax.naming.directory.DirContext; import javax.naming.directory.InitialDirContext; /** * * A class that provides direct and reverse lookup functionalities, allowing * the querying of specific network interfaces or nameservers. * * */ public class DNS { private static final Logger LOG = LoggerFactory.getLogger(DNS.class); /** * The cached hostname - initially null. */ private static final String cachedHostname = resolveLocalHostname(); private static final String cachedHostAddress = resolveLocalHostIPAddress(); private static final String LOCALHOST = "localhost"; /** * Returns the hostname associated with the specified IP address by the * provided nameserver. * * @param hostIp The address to reverse lookup * @param ns The host name of a reachable DNS server * @return The host name associated with the provided IP * @throws NamingException If a NamingException is encountered */ public static String reverseDns(InetAddress hostIp, String ns) throws NamingException { // // Builds the reverse IP lookup form // This is formed by reversing the IP numbers and appending in-addr.arpa // String[] parts = hostIp.getHostAddress().split("\\."); String reverseIP = parts[3] + "." + parts[2] + "." + parts[1] + "." + parts[0] + ".in-addr.arpa"; DirContext ictx = new InitialDirContext(); Attributes attribute; try { attribute = ictx.getAttributes("dns://" // Use "dns:///" if the default + ((ns == null) ? "" : ns) + // nameserver is to be used "/" + reverseIP, new String[] { "PTR" }); } finally { ictx.close(); } return attribute.get("PTR").get().toString(); } /** * @return NetworkInterface for the given subinterface name (eg eth0:0) * or null if no interface with the given name can be found */ private static NetworkInterface getSubinterface(String strInterface) throws SocketException { Enumeration<NetworkInterface> nifs = NetworkInterface.getNetworkInterfaces(); while (nifs.hasMoreElements()) { Enumeration<NetworkInterface> subNifs = nifs.nextElement().getSubInterfaces(); while (subNifs.hasMoreElements()) { NetworkInterface nif = subNifs.nextElement(); if (nif.getName().equals(strInterface)) { return nif; } } } return null; } /** * @param nif network interface to get addresses for * @return set containing addresses for each subinterface of nif, * see below for the rationale for using an ordered set */ private static LinkedHashSet<InetAddress> getSubinterfaceInetAddrs( NetworkInterface nif) { LinkedHashSet<InetAddress> addrs = new LinkedHashSet<InetAddress>(); Enumeration<NetworkInterface> subNifs = nif.getSubInterfaces(); while (subNifs.hasMoreElements()) { NetworkInterface subNif = subNifs.nextElement(); addrs.addAll(Collections.list(subNif.getInetAddresses())); } return addrs; } /** * Like {@link DNS#getIPs(String, boolean)}, but returns all * IPs associated with the given interface and its subinterfaces.
*/ public static String[] getIPs(String strInterface) throws UnknownHostException { return getIPs(strInterface, true); } /** * Returns all the IPs associated with the provided interface, if any, in * textual form. * * @param strInterface * The name of the network interface or sub-interface to query * (eg eth0 or eth0:0) or the string "default" * @param returnSubinterfaces * Whether to return IPs associated with subinterfaces of * the given interface * @return A string vector of all the IPs associated with the provided * interface. The local host IP is returned if the interface * name "default" is specified or there is an I/O error looking * for the given interface. * @throws UnknownHostException * If the given interface is invalid * */ public static String[] getIPs(String strInterface, boolean returnSubinterfaces) throws UnknownHostException { if ("default".equals(strInterface)) { return new String[] { cachedHostAddress }; } NetworkInterface netIf; try { netIf = NetworkInterface.getByName(strInterface); if (netIf == null) { netIf = getSubinterface(strInterface); } } catch (SocketException e) { LOG.warn("I/O error finding interface " + strInterface + ": " + e.getMessage()); return new String[] { cachedHostAddress }; } if (netIf == null) { throw new UnknownHostException("No such interface " + strInterface); } // NB: Using a LinkedHashSet to preserve the order for callers // that depend on a particular element being 1st in the array. // For example, getDefaultIP always returns the first element. LinkedHashSet allAddrs = new LinkedHashSet(); allAddrs.addAll(Collections.list(netIf.getInetAddresses())); if (!returnSubinterfaces) { allAddrs.removeAll(getSubinterfaceInetAddrs(netIf)); } String ips[] = new String[allAddrs.size()]; int i = 0; for (InetAddress addr : allAddrs) { ips[i++] = addr.getHostAddress(); } return ips; } /** * Returns the first available IP address associated with the provided * network interface or the local host IP if "default" is given. * * @param strInterface * The name of the network interface or subinterface to query * (e.g. eth0 or eth0:0) or the string "default" * @return The IP address in text form, the local host IP is returned * if the interface name "default" is specified * @throws UnknownHostException * If the given interface is invalid */ public static String getDefaultIP(String strInterface) throws UnknownHostException { String[] ips = getIPs(strInterface); return ips[0]; } /** * Returns all the host names associated by the provided nameserver with the * address bound to the specified network interface * * @param strInterface * The name of the network interface or subinterface to query * (e.g. 
eth0 or eth0:0) * @param nameserver * The DNS host name * @return A string vector of all host names associated with the IPs tied to * the specified interface * @throws UnknownHostException if the given interface is invalid */ public static String[] getHosts(String strInterface, String nameserver) throws UnknownHostException { String[] ips = getIPs(strInterface); Vector hosts = new Vector(); for (int ctr = 0; ctr < ips.length; ctr++) { try { hosts.add(reverseDns(InetAddress.getByName(ips[ctr]), nameserver)); } catch (UnknownHostException ignored) { } catch (NamingException ignored) { } } if (hosts.isEmpty()) { LOG.warn("Unable to determine hostname for interface " + strInterface); return new String[] { cachedHostname }; } else { return hosts.toArray(new String[hosts.size()]); } } /** * Determine the local hostname; retrieving it from cache if it is known * If we cannot determine our host name, return "localhost" * @return the local hostname or "localhost" */ private static String resolveLocalHostname() { String localhost; try { localhost = InetAddress.getLocalHost().getCanonicalHostName(); } catch (UnknownHostException e) { LOG.warn("Unable to determine local hostname " + "-falling back to \"" + LOCALHOST + "\"", e); localhost = LOCALHOST; } return localhost; } /** * Get the IPAddress of the local host as a string. * This will be a loop back value if the local host address cannot be * determined. * If the loopback address of "localhost" does not resolve, then the system's * network is in such a state that nothing is going to work. A message is * logged at the error level and a null pointer returned, a pointer * which will trigger failures later on the application * @return the IPAddress of the local host or null for a serious problem. */ private static String resolveLocalHostIPAddress() { String address; try { address = InetAddress.getLocalHost().getHostAddress(); } catch (UnknownHostException e) { LOG.warn("Unable to determine address of the host" + "-falling back to \"" + LOCALHOST + "\" address", e); try { address = InetAddress.getByName(LOCALHOST).getHostAddress(); } catch (UnknownHostException noLocalHostAddressException) { //at this point, deep trouble LOG.error("Unable to determine local loopback address " + "of \"" + LOCALHOST + "\" " + "-this system's network configuration is unsupported", e); address = null; } } return address; } /** * Returns all the host names associated by the default nameserver with the * address bound to the specified network interface * * @param strInterface * The name of the network interface to query (e.g. eth0) * @return The list of host names associated with IPs bound to the network * interface * @throws UnknownHostException * If one is encountered while querying the default interface * */ public static String[] getHosts(String strInterface) throws UnknownHostException { return getHosts(strInterface, null); } /** * Returns the default (first) host name associated by the provided * nameserver with the address bound to the specified network interface * * @param strInterface * The name of the network interface to query (e.g. 
eth0) * @param nameserver * The DNS host name * @return The default host names associated with IPs bound to the network * interface * @throws UnknownHostException * If one is encountered while querying the default interface */ public static String getDefaultHost(String strInterface, String nameserver) throws UnknownHostException { if ("default".equals(strInterface)) { return cachedHostname; } if ("default".equals(nameserver)) { return getDefaultHost(strInterface); } String[] hosts = getHosts(strInterface, nameserver); return hosts[0]; } /** * Returns the default (first) host name associated by the default * nameserver with the address bound to the specified network interface * * @param strInterface * The name of the network interface to query (e.g. eth0). * Must not be null. * @return The default host name associated with IPs bound to the network * interface * @throws UnknownHostException * If one is encountered while querying the default interface */ public static String getDefaultHost(String strInterface) throws UnknownHostException { return getDefaultHost(strInterface, null); } } bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/proto/000077500000000000000000000000001244507361200306605ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/proto/BKStats.java000066400000000000000000000171001244507361200330350ustar00rootroot00000000000000/** * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.proto; import java.beans.ConstructorProperties; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Bookie Server Stats */ public class BKStats { private static final Logger LOG = LoggerFactory.getLogger(BKStats.class); private static BKStats instance = new BKStats(); public static BKStats getInstance() { return instance; } /** * A read view of stats, also used in CompositeViewData to expose to JMX */ public static class OpStatData { private final long maxLatency, minLatency; private final double avgLatency; private final long numSuccessOps, numFailedOps; private final String latencyHist; @ConstructorProperties({"maxLatency", "minLatency", "avgLatency", "numSuccessOps", "numFailedOps", "latencyHist"}) public OpStatData(long maxLatency, long minLatency, double avgLatency, long numSuccessOps, long numFailedOps, String latencyHist) { this.maxLatency = maxLatency; this.minLatency = minLatency == Long.MAX_VALUE ? 
0 : minLatency; this.avgLatency = avgLatency; this.numSuccessOps = numSuccessOps; this.numFailedOps = numFailedOps; this.latencyHist = latencyHist; } public long getMaxLatency() { return maxLatency; } public long getMinLatency() { return minLatency; } public double getAvgLatency() { return avgLatency; } public long getNumSuccessOps() { return numSuccessOps; } public long getNumFailedOps() { return numFailedOps; } public String getLatencyHist() { return latencyHist; } } /** * Operation Statistics */ public static class OpStats { static final int NUM_BUCKETS = 3*9 + 2; long maxLatency = 0; long minLatency = Long.MAX_VALUE; double totalLatency = 0.0f; long numSuccessOps = 0; long numFailedOps = 0; long[] latencyBuckets = new long[NUM_BUCKETS]; OpStats() {} /** * Increment number of failed operations */ synchronized public void incrementFailedOps() { ++numFailedOps; } /** * Update Latency */ synchronized public void updateLatency(long latency) { if (latency < 0) { // less than 0ms. Ideally this should not happen. // We have seen this latency negative in some cases due to the // behaviors of JVM. Ignoring the statistics update for such // cases. LOG.warn("Latency time coming negative"); return; } totalLatency += latency; ++numSuccessOps; if (latency < minLatency) { minLatency = latency; } if (latency > maxLatency) { maxLatency = latency; } int bucket; if (latency <= 100) { // 0ms ~ 100ms in 10ms steps, e.g. 45ms -> bucket 4 bucket = (int)(latency / 10); } else if (latency <= 1000) { // 100ms ~ 1000ms in 100ms steps, e.g. 250ms -> bucket 9 + 2 = 11 bucket = 1 * 9 + (int)(latency / 100); } else if (latency <= 10000) { // 1s ~ 10s in 1s steps, e.g. 4000ms -> bucket 18 + 4 = 22 bucket = 2 * 9 + (int)(latency / 1000); } else { // more than 10s bucket = 3 * 9 + 1; } ++latencyBuckets[bucket]; } public OpStatData toOpStatData() { double avgLatency = numSuccessOps > 0 ? totalLatency / numSuccessOps : 0.0f; StringBuilder sb = new StringBuilder(); for (int i = 0; i < NUM_BUCKETS; i++) { sb.append(latencyBuckets[i]); if (i != NUM_BUCKETS - 1) { sb.append(','); } } return new OpStatData(maxLatency, minLatency, avgLatency, numSuccessOps, numFailedOps, sb.toString()); } /** * Diff with base stats. * * @param base base stats * @return diff stats */ public synchronized OpStats diff(OpStats base) { OpStats diff = new OpStats(); diff.maxLatency = this.maxLatency > base.maxLatency ? this.maxLatency : base.maxLatency; diff.minLatency = this.minLatency > base.minLatency ?
base.minLatency : this.minLatency; diff.totalLatency = this.totalLatency - base.totalLatency; diff.numSuccessOps = this.numSuccessOps - base.numSuccessOps; diff.numFailedOps = this.numFailedOps - base.numFailedOps; for (int i = 0; i < NUM_BUCKETS; i++) { diff.latencyBuckets[i] = this.latencyBuckets[i] - base.latencyBuckets[i]; } return diff; } /** * Copy stats from other OpStats * * @param other other op stats * @return void */ public synchronized void copyOf(OpStats other) { this.maxLatency = other.maxLatency; this.minLatency = other.minLatency; this.totalLatency = other.totalLatency; this.numSuccessOps = other.numSuccessOps; this.numFailedOps = other.numFailedOps; System.arraycopy(other.latencyBuckets, 0, this.latencyBuckets, 0, this.latencyBuckets.length); } } public static final int STATS_ADD = 0; public static final int STATS_READ = 1; public static final int STATS_UNKNOWN = 2; // NOTE: if add other stats, increment NUM_STATS public static final int NUM_STATS = 3; OpStats[] stats = new OpStats[NUM_STATS]; private BKStats() { for (int i=0; i channels = new ConcurrentHashMap(); final ScheduledExecutorService timeoutExecutor = Executors.newSingleThreadScheduledExecutor(); private final ClientConfiguration conf; private volatile boolean closed; private final ReentrantReadWriteLock closeLock; public BookieClient(ClientConfiguration conf, ClientSocketChannelFactory channelFactory, OrderedSafeExecutor executor) { this.conf = conf; this.channelFactory = channelFactory; this.executor = executor; this.closed = false; this.closeLock = new ReentrantReadWriteLock(); } public PerChannelBookieClient lookupClient(InetSocketAddress addr) { PerChannelBookieClient channel = channels.get(addr); if (channel == null) { closeLock.readLock().lock(); try { if (closed) { return null; } channel = new PerChannelBookieClient(conf, executor, channelFactory, addr, totalBytesOutstanding, timeoutExecutor); PerChannelBookieClient prevChannel = channels.putIfAbsent(addr, channel); if (prevChannel != null) { channel = prevChannel; } } finally { closeLock.readLock().unlock(); } } return channel; } public void closeClients(Set addrs) { final HashSet clients = new HashSet(); for (InetSocketAddress a : addrs) { PerChannelBookieClient c = channels.get(a); if (c != null) { clients.add(c); } } if (clients.size() == 0) { return; } executor.submit(new SafeRunnable() { @Override public void safeRun() { for (PerChannelBookieClient c : clients) { c.disconnect(); } } }); } public void addEntry(final InetSocketAddress addr, final long ledgerId, final byte[] masterKey, final long entryId, final ChannelBuffer toSend, final WriteCallback cb, final Object ctx, final int options) { final PerChannelBookieClient client = lookupClient(addr); if (client == null) { cb.writeComplete(BKException.Code.BookieHandleNotAvailableException, ledgerId, entryId, addr, ctx); return; } client.connectIfNeededAndDoOp(new GenericCallback() { @Override public void operationComplete(final int rc, Void result) { if (rc != BKException.Code.OK) { executor.submitOrdered(ledgerId, new SafeRunnable() { @Override public void safeRun() { cb.writeComplete(rc, ledgerId, entryId, addr, ctx); } }); return; } client.addEntry(ledgerId, masterKey, entryId, toSend, cb, ctx, options); } }); } public void readEntryAndFenceLedger(final InetSocketAddress addr, final long ledgerId, final byte[] masterKey, final long entryId, final ReadEntryCallback cb, final Object ctx) { final PerChannelBookieClient client = lookupClient(addr); if (client == null) { 
cb.readEntryComplete(BKException.Code.BookieHandleNotAvailableException, ledgerId, entryId, null, ctx); return; } client.connectIfNeededAndDoOp(new GenericCallback() { @Override public void operationComplete(final int rc, Void result) { if (rc != BKException.Code.OK) { executor.submitOrdered(ledgerId, new SafeRunnable() { @Override public void safeRun() { cb.readEntryComplete(rc, ledgerId, entryId, null, ctx); } }); return; } client.readEntryAndFenceLedger(ledgerId, masterKey, entryId, cb, ctx); } }); } public void readEntry(final InetSocketAddress addr, final long ledgerId, final long entryId, final ReadEntryCallback cb, final Object ctx) { final PerChannelBookieClient client = lookupClient(addr); if (client == null) { cb.readEntryComplete(BKException.Code.BookieHandleNotAvailableException, ledgerId, entryId, null, ctx); return; } client.connectIfNeededAndDoOp(new GenericCallback() { @Override public void operationComplete(final int rc, Void result) { if (rc != BKException.Code.OK) { executor.submitOrdered(ledgerId, new SafeRunnable() { @Override public void safeRun() { cb.readEntryComplete(rc, ledgerId, entryId, null, ctx); } }); return; } client.readEntry(ledgerId, entryId, cb, ctx); } }); } public void close() { closeLock.writeLock().lock(); try { closed = true; for (PerChannelBookieClient channel: channels.values()) { channel.close(); } channels.clear(); } finally { closeLock.writeLock().unlock(); } } private static class Counter { int i; int total; synchronized void inc() { i++; total++; } synchronized void dec() { i--; notifyAll(); } synchronized void wait(int limit) throws InterruptedException { while (i > limit) { wait(); } } synchronized int total() { return total; } } /** * @param args * @throws IOException * @throws NumberFormatException * @throws InterruptedException */ public static void main(String[] args) throws NumberFormatException, IOException, InterruptedException { if (args.length != 3) { System.err.println("USAGE: BookieClient bookieHost port ledger#"); return; } WriteCallback cb = new WriteCallback() { public void writeComplete(int rc, long ledger, long entry, InetSocketAddress addr, Object ctx) { Counter counter = (Counter) ctx; counter.dec(); if (rc != 0) { System.out.println("rc = " + rc + " for " + entry + "@" + ledger); } } }; Counter counter = new Counter(); byte hello[] = "hello".getBytes(); long ledger = Long.parseLong(args[2]); ClientSocketChannelFactory channelFactory = new NioClientSocketChannelFactory(Executors.newCachedThreadPool(), Executors .newCachedThreadPool()); OrderedSafeExecutor executor = new OrderedSafeExecutor(1); BookieClient bc = new BookieClient(new ClientConfiguration(), channelFactory, executor); InetSocketAddress addr = new InetSocketAddress(args[0], Integer.parseInt(args[1])); for (int i = 0; i < 100000; i++) { counter.inc(); bc.addEntry(addr, ledger, new byte[0], i, ChannelBuffers.wrappedBuffer(hello), cb, counter, 0); } counter.wait(0); System.out.println("Total = " + counter.total()); channelFactory.releaseExternalResources(); executor.shutdown(); } } BookieProtocol.java000066400000000000000000000126261244507361200344050ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/protopackage org.apache.bookkeeper.proto; /* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. 
The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ /** * The packets of the Bookie protocol all have a 4-byte integer indicating the * type of request or response at the very beginning of the packet followed by a * payload. * */ public interface BookieProtocol { /** * Lowest protocol version which will work with the bookie. */ public static final byte LOWEST_COMPAT_PROTOCOL_VERSION = 0; /** * Current version of the protocol, which the client will use. */ public static final byte CURRENT_PROTOCOL_VERSION = 2; /** * Invalid entry id. To be used when no valid entry id can be assigned. */ public static final long INVALID_ENTRY_ID = -1; /** * Entry identifier representing a request to obtain the last add entry confirmed */ public static final long LAST_ADD_CONFIRMED = -1; /** * The length of the master key in add packets. This * is fixed at 20 for historic reasons. This is because it * is always generated using the MacDigestManager regardless * of whether Mac is being used for the digest or not */ public static final int MASTER_KEY_LENGTH = 20; /** * The first int of a packet is the header. * It contains the version, opCode and flags. * The initial versions of BK didn't have this structure * and just had an int representing the opCode as the * first int. This handles that case also. */ static class PacketHeader { final byte version; final byte opCode; final short flags; public PacketHeader(byte version, byte opCode, short flags) { this.version = version; this.opCode = opCode; this.flags = flags; } int toInt() { if (version == 0) { return (int)opCode; } else { return ((version & 0xFF) << 24) | ((opCode & 0xFF) << 16) | (flags & 0xFFFF); } } static PacketHeader fromInt(int i) { byte version = (byte)(i >> 24); byte opCode = 0; short flags = 0; if (version == 0) { opCode = (byte)i; } else { opCode = (byte)((i >> 16) & 0xFF); flags = (short)(i & 0xFFFF); } return new PacketHeader(version, opCode, flags); } byte getVersion() { return version; } byte getOpCode() { return opCode; } short getFlags() { return flags; } } /** * The Add entry request payload will be a ledger entry exactly as it should * be logged. The response payload will be a 4-byte integer that has the * error code followed by the 8-byte ledger number and 8-byte entry number * of the entry written. */ public static final byte ADDENTRY = 1; /** * The Read entry request payload will be the ledger number and entry number * to read. (The ledger number is an 8-byte integer and the entry number is * an 8-byte integer.) The response payload will be a 4-byte integer * representing an error code and a ledger entry if the error code is EOK, * otherwise it will be the 8-byte ledger number and the 4-byte entry number * requested. (Note that the first sixteen bytes of the entry happen to be * the ledger number and entry number as well.)
*/ public static final byte READENTRY = 2; /** * The error code that indicates success */ public static final int EOK = 0; /** * The error code that indicates that the ledger does not exist */ public static final int ENOLEDGER = 1; /** * The error code that indicates that the requested entry does not exist */ public static final int ENOENTRY = 2; /** * The error code that indicates an invalid request type */ public static final int EBADREQ = 100; /** * General error occurred at the server */ public static final int EIO = 101; /** * Unauthorized access to ledger */ public static final int EUA = 102; /** * The server version is incompatible with the client */ public static final int EBADVERSION = 103; /** * Attempt to write to fenced ledger */ public static final int EFENCED = 104; /** * The server is running as read-only mode */ public static final int EREADONLY = 105; public static final short FLAG_NONE = 0x0; public static final short FLAG_DO_FENCING = 0x0001; public static final short FLAG_RECOVERY_ADD = 0x0002; } BookieServer.java000066400000000000000000000574741244507361200340640ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/proto/** * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
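The header packing in PacketHeader rewards a worked example. For protocol version 2, opCode ADDENTRY (1) and flags FLAG_RECOVERY_ADD (0x0002), toInt() computes ((2 & 0xFF) << 24) | ((1 & 0xFF) << 16) | (0x0002 & 0xFFFF) == 0x02010002, while a version-0 packet is just the bare opCode. A sketch of a round trip; note this only compiles inside org.apache.bookkeeper.proto, since toInt and fromInt are package-private:

// version 2, ADDENTRY, recovery-add flag -> 0x02010002
int packed = new BookieProtocol.PacketHeader((byte) 2, BookieProtocol.ADDENTRY,
                                             BookieProtocol.FLAG_RECOVERY_ADD).toInt();
// Decoding reverses the shifts.
BookieProtocol.PacketHeader h = BookieProtocol.PacketHeader.fromInt(packed);
// h.getVersion() == 2, h.getOpCode() == BookieProtocol.ADDENTRY, h.getFlags() == 0x0002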
* */ package org.apache.bookkeeper.proto; import java.io.File; import java.io.IOException; import java.net.InetSocketAddress; import java.net.MalformedURLException; import java.net.UnknownHostException; import java.nio.ByteBuffer; import java.util.concurrent.ExecutionException; import java.util.concurrent.Future; import java.util.concurrent.TimeUnit; import java.util.concurrent.TimeoutException; import org.apache.zookeeper.KeeperException; import org.apache.bookkeeper.bookie.Bookie; import org.apache.bookkeeper.bookie.BookieException; import org.apache.bookkeeper.bookie.ExitCode; import org.apache.bookkeeper.conf.ServerConfiguration; import org.apache.bookkeeper.jmx.BKMBeanRegistry; import org.apache.bookkeeper.proto.NIOServerFactory.Cnxn; import org.apache.bookkeeper.replication.AutoRecoveryMain; import org.apache.bookkeeper.replication.ReplicationException.CompatibilityException; import org.apache.bookkeeper.replication.ReplicationException.UnavailableException; import org.apache.bookkeeper.util.MathUtils; import com.google.common.annotations.VisibleForTesting; import static org.apache.bookkeeper.proto.BookieProtocol.PacketHeader; import org.apache.commons.configuration.ConfigurationException; import org.apache.commons.cli.BasicParser; import org.apache.commons.cli.Options; import org.apache.commons.cli.CommandLine; import org.apache.commons.cli.HelpFormatter; import org.apache.commons.cli.ParseException; import org.apache.commons.codec.binary.Hex; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Implements the server-side part of the BookKeeper protocol. * */ public class BookieServer implements NIOServerFactory.PacketProcessor, BookkeeperInternalCallbacks.WriteCallback { final ServerConfiguration conf; NIOServerFactory nioServerFactory; private volatile boolean running = false; Bookie bookie; DeathWatcher deathWatcher; static Logger LOG = LoggerFactory.getLogger(BookieServer.class); // operation stats final BKStats bkStats = BKStats.getInstance(); final boolean isStatsEnabled; protected BookieServerBean jmxBkServerBean; AutoRecoveryMain autoRecoveryMain = null; private boolean isAutoRecoveryDaemonEnabled; public BookieServer(ServerConfiguration conf) throws IOException, KeeperException, InterruptedException, BookieException, UnavailableException, CompatibilityException { this.conf = conf; this.bookie = newBookie(conf); isAutoRecoveryDaemonEnabled = conf.isAutoRecoveryDaemonEnabled(); if (isAutoRecoveryDaemonEnabled) { this.autoRecoveryMain = new AutoRecoveryMain(conf); } isStatsEnabled = conf.isStatisticsEnabled(); } protected Bookie newBookie(ServerConfiguration conf) throws IOException, KeeperException, InterruptedException, BookieException { return new Bookie(conf); } public void start() throws IOException, UnavailableException { nioServerFactory = new NIOServerFactory(conf, this); this.bookie.start(); // fail fast, when bookie startup is not successful if (!this.bookie.isRunning()) { return; } if (isAutoRecoveryDaemonEnabled && this.autoRecoveryMain != null) { this.autoRecoveryMain.start(); } nioServerFactory.start(); running = true; deathWatcher = new DeathWatcher(conf); deathWatcher.start(); // register jmx registerJMX(); } @VisibleForTesting public InetSocketAddress getLocalAddress() { try { return Bookie.getBookieAddress(conf); } catch (UnknownHostException uhe) { return nioServerFactory.getLocalAddress(); } } @VisibleForTesting public Bookie getBookie() { return bookie; } /** * Suspend processing of requests in the bookie (for testing) */ @VisibleForTesting 
public void suspendProcessing() { nioServerFactory.suspendProcessing(); } /** * Resume processing requests in the bookie (for testing) */ @VisibleForTesting public void resumeProcessing() { nioServerFactory.resumeProcessing(); } public synchronized void shutdown() { if (!running) { return; } nioServerFactory.shutdown(); bookie.shutdown(); if (isAutoRecoveryDaemonEnabled && this.autoRecoveryMain != null) { this.autoRecoveryMain.shutdown(); } running = false; // unregister JMX unregisterJMX(); } protected void registerJMX() { try { jmxBkServerBean = new BookieServerBean(conf, this); BKMBeanRegistry.getInstance().register(jmxBkServerBean, null); bookie.registerJMX(jmxBkServerBean); } catch (Exception e) { LOG.warn("Failed to register with JMX", e); jmxBkServerBean = null; } } protected void unregisterJMX() { try { bookie.unregisterJMX(); if (jmxBkServerBean != null) { BKMBeanRegistry.getInstance().unregister(jmxBkServerBean); } } catch (Exception e) { LOG.warn("Failed to unregister with JMX", e); } jmxBkServerBean = null; } public boolean isRunning() { return bookie.isRunning() && nioServerFactory.isRunning() && running; } /** * Whether bookie is running? * * @return true if bookie is running, otherwise return false */ public boolean isBookieRunning() { return bookie.isRunning(); } /** * Whether auto-recovery service running with Bookie? * * @return true if auto-recovery service is running, otherwise return false */ public boolean isAutoRecoveryRunning() { return this.autoRecoveryMain != null && this.autoRecoveryMain.isAutoRecoveryRunning(); } /** * Whether nio server is running? * * @return true if nio server is running, otherwise return false */ public boolean isNioServerRunning() { return nioServerFactory.isRunning(); } public void join() throws InterruptedException { nioServerFactory.join(); } public int getExitCode() { int exitCode = bookie.getExitCode(); if (exitCode == ExitCode.OK) { if (nioServerFactory.hasCrashed()) { return ExitCode.SERVER_EXCEPTION; } } return exitCode; } /** * A thread to watch whether bookie & nioserver is still alive */ class DeathWatcher extends Thread { final int watchInterval; DeathWatcher(ServerConfiguration conf) { super("BookieDeathWatcher-" + conf.getBookiePort()); watchInterval = conf.getDeathWatchInterval(); } @Override public void run() { while(true) { try { Thread.sleep(watchInterval); } catch (InterruptedException ie) { // do nothing } if (!isBookieRunning() || !isNioServerRunning()) { shutdown(); break; } if (isAutoRecoveryDaemonEnabled && !isAutoRecoveryRunning()) { LOG.error("Autorecovery daemon has stopped. 
Please check the logs"); isAutoRecoveryDaemonEnabled = false; // to avoid spamming the logs } } } } static final Options bkOpts = new Options(); static { bkOpts.addOption("c", "conf", true, "Configuration for Bookie Server"); bkOpts.addOption("withAutoRecovery", false, "Start Autorecovery service Bookie server"); bkOpts.addOption("h", "help", false, "Print help message"); } /** * Print usage */ private static void printUsage() { HelpFormatter hf = new HelpFormatter(); hf.printHelp("BookieServer [options]\n\tor\n" + "BookieServer ", bkOpts); } private static void loadConfFile(ServerConfiguration conf, String confFile) throws IllegalArgumentException { try { conf.loadConf(new File(confFile).toURI().toURL()); } catch (MalformedURLException e) { LOG.error("Could not open configuration file: " + confFile, e); throw new IllegalArgumentException(); } catch (ConfigurationException e) { LOG.error("Malformed configuration file: " + confFile, e); throw new IllegalArgumentException(); } LOG.info("Using configuration file " + confFile); } private static ServerConfiguration parseArgs(String[] args) throws IllegalArgumentException { try { BasicParser parser = new BasicParser(); CommandLine cmdLine = parser.parse(bkOpts, args); if (cmdLine.hasOption('h')) { throw new IllegalArgumentException(); } ServerConfiguration conf = new ServerConfiguration(); String[] leftArgs = cmdLine.getArgs(); if (cmdLine.hasOption('c')) { if (null != leftArgs && leftArgs.length > 0) { throw new IllegalArgumentException(); } String confFile = cmdLine.getOptionValue("c"); loadConfFile(conf, confFile); return conf; } if (cmdLine.hasOption("withAutoRecovery")) { conf.setAutoRecoveryDaemonEnabled(true); } if (leftArgs.length < 4) { throw new IllegalArgumentException(); } // command line arguments overwrite settings in configuration file conf.setBookiePort(Integer.parseInt(leftArgs[0])); conf.setZkServers(leftArgs[1]); conf.setJournalDirName(leftArgs[2]); String[] ledgerDirNames = new String[leftArgs.length - 3]; System.arraycopy(leftArgs, 3, ledgerDirNames, 0, ledgerDirNames.length); conf.setLedgerDirNames(ledgerDirNames); return conf; } catch (ParseException e) { LOG.error("Error parsing command line arguments : ", e); throw new IllegalArgumentException(e); } } /** * @param args * @throws IOException * @throws InterruptedException */ public static void main(String[] args) { ServerConfiguration conf = null; try { conf = parseArgs(args); } catch (IllegalArgumentException iae) { LOG.error("Error parsing command line arguments : ", iae); System.err.println(iae.getMessage()); printUsage(); System.exit(ExitCode.INVALID_CONF); } StringBuilder sb = new StringBuilder(); String[] ledgerDirNames = conf.getLedgerDirNames(); for (int i = 0; i < ledgerDirNames.length; i++) { if (i != 0) { sb.append(','); } sb.append(ledgerDirNames[i]); } String hello = String.format( "Hello, I'm your bookie, listening on port %1$s. ZKServers are on %2$s. Journals are in %3$s. 
Ledgers are stored in %4$s.", conf.getBookiePort(), conf.getZkServers(), conf.getJournalDirName(), sb); LOG.info(hello); try { final BookieServer bs = new BookieServer(conf); bs.start(); Runtime.getRuntime().addShutdownHook(new Thread() { @Override public void run() { bs.shutdown(); LOG.info("Shut down bookie server successfully"); } }); LOG.info("Register shutdown hook successfully"); bs.join(); System.exit(bs.getExitCode()); } catch (Exception e) { LOG.error("Exception running bookie server : ", e); System.exit(ExitCode.SERVER_EXCEPTION); } } public void processPacket(ByteBuffer packet, Cnxn src) { PacketHeader h = PacketHeader.fromInt(packet.getInt()); boolean success = false; int statType = BKStats.STATS_UNKNOWN; long startTime = 0; if (isStatsEnabled) { startTime = MathUtils.now(); } // packet format is different between ADDENTRY and READENTRY long ledgerId = -1; long entryId = BookieProtocol.INVALID_ENTRY_ID; byte[] masterKey = null; switch (h.getOpCode()) { case BookieProtocol.ADDENTRY: // first read master key masterKey = new byte[BookieProtocol.MASTER_KEY_LENGTH]; packet.get(masterKey, 0, BookieProtocol.MASTER_KEY_LENGTH); ByteBuffer bb = packet.duplicate(); ledgerId = bb.getLong(); entryId = bb.getLong(); break; case BookieProtocol.READENTRY: ledgerId = packet.getLong(); entryId = packet.getLong(); break; } if (h.getVersion() < BookieProtocol.LOWEST_COMPAT_PROTOCOL_VERSION || h.getVersion() > BookieProtocol.CURRENT_PROTOCOL_VERSION) { LOG.error("Invalid protocol version, expected something between " + BookieProtocol.LOWEST_COMPAT_PROTOCOL_VERSION + " & " + BookieProtocol.CURRENT_PROTOCOL_VERSION + ". got " + h.getVersion()); src.sendResponse(buildResponse(BookieProtocol.EBADVERSION, h.getVersion(), h.getOpCode(), ledgerId, entryId)); return; } short flags = h.getFlags(); switch (h.getOpCode()) { case BookieProtocol.ADDENTRY: statType = BKStats.STATS_ADD; if (bookie.isReadOnly()) { LOG.warn("BookieServer is running as readonly mode," + " so rejecting the request from the client!"); src.sendResponse(buildResponse(BookieProtocol.EREADONLY, h.getVersion(), h.getOpCode(), ledgerId, entryId)); break; } try { TimedCnxn tsrc = new TimedCnxn(src, startTime); if ((flags & BookieProtocol.FLAG_RECOVERY_ADD) == BookieProtocol.FLAG_RECOVERY_ADD) { bookie.recoveryAddEntry(packet.slice(), this, tsrc, masterKey); } else { bookie.addEntry(packet.slice(), this, tsrc, masterKey); } success = true; } catch (IOException e) { LOG.error("Error writing " + entryId + "@" + ledgerId, e); src.sendResponse(buildResponse(BookieProtocol.EIO, h.getVersion(), h.getOpCode(), ledgerId, entryId)); } catch (BookieException.LedgerFencedException lfe) { LOG.error("Attempt to write to fenced ledger", lfe); src.sendResponse(buildResponse(BookieProtocol.EFENCED, h.getVersion(), h.getOpCode(), ledgerId, entryId)); } catch (BookieException e) { LOG.error("Unauthorized access to ledger " + ledgerId, e); src.sendResponse(buildResponse(BookieProtocol.EUA, h.getVersion(), h.getOpCode(), ledgerId, entryId)); } break; case BookieProtocol.READENTRY: statType = BKStats.STATS_READ; ByteBuffer[] rsp = new ByteBuffer[2]; LOG.debug("Received new read request: {}, {}", ledgerId, entryId); int errorCode = BookieProtocol.EIO; try { Future fenceResult = null; if ((flags & BookieProtocol.FLAG_DO_FENCING) == BookieProtocol.FLAG_DO_FENCING) { LOG.warn("Ledger " + ledgerId + " fenced by " + src.getPeerName()); if (h.getVersion() >= 2) { masterKey = new byte[BookieProtocol.MASTER_KEY_LENGTH]; packet.get(masterKey, 0, 
BookieProtocol.MASTER_KEY_LENGTH); fenceResult = bookie.fenceLedger(ledgerId, masterKey); } else { LOG.error("Password not provided, Not safe to fence {}", ledgerId); throw BookieException.create(BookieException.Code.UnauthorizedAccessException); } } rsp[1] = bookie.readEntry(ledgerId, entryId); LOG.debug("##### Read entry ##### {}", rsp[1].remaining()); if (null != fenceResult) { // TODO: // currently we don't have readCallback to run in separated read // threads. after BOOKKEEPER-429 is complete, we could improve // following code to make it not wait here // // For now, since we only try to wait after read entry. so writing // to journal and read entry are executed in different thread // it would be fine. try { Boolean fenced = fenceResult.get(1000, TimeUnit.MILLISECONDS); if (null == fenced || !fenced) { // if failed to fence, fail the read request to make it retry. errorCode = BookieProtocol.EIO; success = false; rsp[1] = null; } else { errorCode = BookieProtocol.EOK; success = true; } } catch (InterruptedException ie) { LOG.error("Interrupting fence read entry (lid:" + ledgerId + ", eid:" + entryId + ") :", ie); errorCode = BookieProtocol.EIO; success = false; rsp[1] = null; } catch (ExecutionException ee) { LOG.error("Failed to fence read entry (lid:" + ledgerId + ", eid:" + entryId + ") :", ee); errorCode = BookieProtocol.EIO; success = false; rsp[1] = null; } catch (TimeoutException te) { LOG.error("Timeout to fence read entry (lid:" + ledgerId + ", eid:" + entryId + ") :", te); errorCode = BookieProtocol.EIO; success = false; rsp[1] = null; } } else { errorCode = BookieProtocol.EOK; success = true; } } catch (Bookie.NoLedgerException e) { if (LOG.isTraceEnabled()) { LOG.error("Error reading " + entryId + "@" + ledgerId, e); } errorCode = BookieProtocol.ENOLEDGER; } catch (Bookie.NoEntryException e) { if (LOG.isTraceEnabled()) { LOG.error("Error reading " + entryId + "@" + ledgerId, e); } errorCode = BookieProtocol.ENOENTRY; } catch (IOException e) { if (LOG.isTraceEnabled()) { LOG.error("Error reading " + entryId + "@" + ledgerId, e); } errorCode = BookieProtocol.EIO; } catch (BookieException e) { LOG.error("Unauthorized access to ledger " + ledgerId, e); errorCode = BookieProtocol.EUA; } rsp[0] = buildResponse(errorCode, h.getVersion(), h.getOpCode(), ledgerId, entryId); if (LOG.isTraceEnabled()) { LOG.trace("Read entry rc = " + errorCode + " for " + entryId + "@" + ledgerId); } if (rsp[1] == null) { // We haven't filled in entry data, so we have to send back // the ledger and entry ids here rsp[1] = ByteBuffer.allocate(16); rsp[1].putLong(ledgerId); rsp[1].putLong(entryId); rsp[1].flip(); } if (LOG.isTraceEnabled()) { byte[] content = new byte[rsp[1].remaining()]; rsp[1].duplicate().get(content); LOG.trace("Sending response for: {}, content: {}", entryId, Hex.encodeHexString(content)); } else { LOG.debug("Sending response for: {}, length: {}", entryId, rsp[1].remaining()); } src.sendResponse(rsp); break; default: src.sendResponse(buildResponse(BookieProtocol.EBADREQ, h.getVersion(), h.getOpCode(), ledgerId, entryId)); } if (isStatsEnabled) { if (success) { // for add operations, we compute latency in writeComplete callbacks. 
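// (Editor's note, not in the original source:) reads therefore record their
// latency right here on the request path, while adds are recorded later, in
// writeComplete(), using the start time carried in the TimedCnxn wrapper.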
if (statType != BKStats.STATS_ADD) { long elapsedTime = MathUtils.now() - startTime; bkStats.getOpStats(statType).updateLatency(elapsedTime); } } else { bkStats.getOpStats(statType).incrementFailedOps(); } } } private ByteBuffer buildResponse(int errorCode, byte version, byte opCode, long ledgerId, long entryId) { ByteBuffer rsp = ByteBuffer.allocate(24); rsp.putInt(new PacketHeader(version, opCode, (short)0).toInt()); rsp.putInt(errorCode); rsp.putLong(ledgerId); rsp.putLong(entryId); rsp.flip(); return rsp; } public void writeComplete(int rc, long ledgerId, long entryId, InetSocketAddress addr, Object ctx) { TimedCnxn tcnxn = (TimedCnxn) ctx; Cnxn src = tcnxn.cnxn; long startTime = tcnxn.time; ByteBuffer bb = ByteBuffer.allocate(24); bb.putInt(new PacketHeader(BookieProtocol.CURRENT_PROTOCOL_VERSION, BookieProtocol.ADDENTRY, (short)0).toInt()); bb.putInt(rc); bb.putLong(ledgerId); bb.putLong(entryId); bb.flip(); if (LOG.isTraceEnabled()) { LOG.trace("Add entry rc = " + rc + " for " + entryId + "@" + ledgerId); } src.sendResponse(new ByteBuffer[] { bb }); if (isStatsEnabled) { // compute the latency if (0 == rc) { // for add operations, we compute latency in writeComplete callbacks. long elapsedTime = MathUtils.now() - startTime; bkStats.getOpStats(BKStats.STATS_ADD).updateLatency(elapsedTime); } else { bkStats.getOpStats(BKStats.STATS_ADD).incrementFailedOps(); } } } /** * A cnxn wrapper for time */ static class TimedCnxn { Cnxn cnxn; long time; public TimedCnxn(Cnxn cnxn, long startTime) { this.cnxn = cnxn; this.time = startTime; } } } BookieServerBean.java000066400000000000000000000050251244507361200346330ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/proto/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package org.apache.bookkeeper.proto; import java.net.InetAddress; import java.net.UnknownHostException; import org.apache.bookkeeper.util.StringUtils; import org.apache.bookkeeper.bookie.Bookie; import org.apache.bookkeeper.conf.ServerConfiguration; import org.apache.bookkeeper.jmx.BKMBeanInfo; import org.apache.bookkeeper.proto.BKStats; import org.apache.bookkeeper.proto.BKStats.OpStats; import org.apache.bookkeeper.proto.BKStats.OpStatData; /** * Bookie Server Bean */ public class BookieServerBean implements BookieServerMXBean, BKMBeanInfo { protected final BookieServer bks; protected final ServerConfiguration conf; private final String name; public BookieServerBean(ServerConfiguration conf, BookieServer bks) { this.conf = conf; this.bks = bks; name = "BookieServer_" + conf.getBookiePort(); } @Override public String getName() { return name; } @Override public boolean isHidden() { return false; } @Override public long getNumPacketsReceived() { return ServerStats.getInstance().getPacketsReceived(); } @Override public long getNumPacketsSent() { return ServerStats.getInstance().getPacketsSent(); } @Override public OpStatData getAddStats() { return bks.bkStats.getOpStats(BKStats.STATS_ADD).toOpStatData(); } @Override public OpStatData getReadStats() { return bks.bkStats.getOpStats(BKStats.STATS_READ).toOpStatData(); } @Override public String getServerPort() { try { return StringUtils.addrToString(Bookie.getBookieAddress(conf)); } catch (UnknownHostException e) { return "localhost:" + conf.getBookiePort(); } } } BookieServerMXBean.java000066400000000000000000000025511244507361200351010ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/proto/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.bookkeeper.proto; import org.apache.bookkeeper.proto.BKStats.OpStatData; /** * Bookie Server MBean */ public interface BookieServerMXBean { /** * @return packets received */ public long getNumPacketsReceived(); /** * @return packets sent */ public long getNumPacketsSent(); /** * @return add stats */ public OpStatData getAddStats(); /** * @return read stats */ public OpStatData getReadStats(); /** * @return server port */ public String getServerPort(); } BookkeeperInternalCallbacks.java000066400000000000000000000114531244507361200370330ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/proto/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. 
The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.proto; import java.net.InetSocketAddress; import java.util.concurrent.LinkedBlockingQueue; import java.util.concurrent.atomic.AtomicInteger; import org.apache.bookkeeper.client.LedgerMetadata; import org.apache.zookeeper.AsyncCallback; import org.jboss.netty.buffer.ChannelBuffer; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Declaration of the callback interfaces used in the bookkeeper client library but * not exposed to the client application. */ public class BookkeeperInternalCallbacks { static final Logger LOG = LoggerFactory.getLogger(BookkeeperInternalCallbacks.class); /** * Listener on ledger metadata changes. */ public interface LedgerMetadataListener { /** * Triggered each time ledger metadata changed. * * @param ledgerId * ledger id. * @param metadata * new ledger metadata. */ void onChanged(long ledgerId, LedgerMetadata metadata); } /** * Callback for calls from BookieClient objects. Such calls are for replies * of write operations (operations to add an entry to a ledger). * */ public interface WriteCallback { void writeComplete(int rc, long ledgerId, long entryId, InetSocketAddress addr, Object ctx); } public interface GenericCallback<T> { void operationComplete(int rc, T result); } /** * Declaration of a callback implementation for calls from BookieClient objects. * Such calls are for replies of read operations (operations to read an entry * from a ledger). * */ public interface ReadEntryCallback { void readEntryComplete(int rc, long ledgerId, long entryId, ChannelBuffer buffer, Object ctx); } /** * This is a multi callback object that waits for all of * the multiple async operations to complete. 
If any fail, then we invoke * the final callback with a provided failureRc */ public static class MultiCallback implements AsyncCallback.VoidCallback { // Number of expected callbacks final int expected; final int failureRc; final int successRc; // Final callback and the corresponding context to invoke final AsyncCallback.VoidCallback cb; final Object context; // This keeps track of how many operations have completed final AtomicInteger done = new AtomicInteger(); // List of the exceptions from operations that completed unsuccessfully final LinkedBlockingQueue exceptions = new LinkedBlockingQueue(); public MultiCallback(int expected, AsyncCallback.VoidCallback cb, Object context, int successRc, int failureRc) { this.expected = expected; this.cb = cb; this.context = context; this.failureRc = failureRc; this.successRc = successRc; if (expected == 0) { cb.processResult(successRc, null, context); } } private void tick() { if (done.incrementAndGet() == expected) { if (exceptions.isEmpty()) { cb.processResult(successRc, null, context); } else { cb.processResult(failureRc, null, context); } } } @Override public void processResult(int rc, String path, Object ctx) { if (rc != successRc) { LOG.error("Error in multi callback : " + rc); exceptions.add(rc); } tick(); } } /** * Processor to process a specific element */ public static interface Processor { /** * Process a specific element * * @param data * data to process * @param iterationCallback * Callback to invoke when process has been done. */ public void process(T data, AsyncCallback.VoidCallback cb); } } NIOServerFactory.java000066400000000000000000000503001244507361200346060ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/proto/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.bookkeeper.proto; import java.io.IOException; import java.net.InetSocketAddress; import java.nio.ByteBuffer; import java.nio.channels.CancelledKeyException; import java.nio.channels.Channel; import java.nio.channels.SelectionKey; import java.nio.channels.Selector; import java.nio.channels.ServerSocketChannel; import java.nio.channels.SocketChannel; import java.util.ArrayList; import java.util.Collections; import java.util.HashSet; import java.util.Iterator; import java.util.Set; import java.util.concurrent.LinkedBlockingQueue; import java.util.concurrent.atomic.AtomicBoolean; import org.apache.bookkeeper.bookie.Bookie; import org.apache.bookkeeper.conf.ServerConfiguration; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import com.google.common.annotations.VisibleForTesting; /** * This class handles communication with clients using NIO. There is one Cnxn * per client, but only one thread doing the communication. 
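 *
 * <p>A minimal usage sketch (editor's illustration, not part of the original
 * source; the {@code processor} instance is an assumption):
 * <pre>
 * ServerConfiguration conf = new ServerConfiguration();
 * conf.setBookiePort(3181);
 * NIOServerFactory factory = new NIOServerFactory(conf, processor);
 * factory.start();    // runs the accept/select loop in its own thread
 * // ... serve requests ...
 * factory.shutdown(); // closes the server socket and joins the thread
 * </pre>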
*/ public class NIOServerFactory extends Thread { public interface PacketProcessor { public void processPacket(ByteBuffer packet, Cnxn src); } ServerStats stats = new ServerStats(); Logger LOG = LoggerFactory.getLogger(NIOServerFactory.class); ServerSocketChannel ss; Selector selector = Selector.open(); /** * We use this buffer to do efficient socket I/O. Since there is a single * sender thread per NIOServerCnxn instance, we can use a member variable to * only allocate it once. */ ByteBuffer directBuffer = ByteBuffer.allocateDirect(64 * 1024); HashSet<Cnxn> cnxns = new HashSet<Cnxn>(); int outstandingLimit = 2000; PacketProcessor processor; long minLatency = 99999999; ServerConfiguration conf; private AtomicBoolean crashed = new AtomicBoolean(false); private Object suspensionLock = new Object(); private boolean suspended = false; public NIOServerFactory(ServerConfiguration conf, PacketProcessor processor) throws IOException { super("NIOServerFactory-" + conf.getBookiePort()); setDaemon(true); this.processor = processor; this.conf = conf; this.ss = ServerSocketChannel.open(); if (conf.getListeningInterface() == null) { // listen on all interfaces ss.socket().bind(new InetSocketAddress(conf.getBookiePort())); } else { ss.socket().bind(Bookie.getBookieAddress(conf)); } ss.configureBlocking(false); ss.register(selector, SelectionKey.OP_ACCEPT); } public InetSocketAddress getLocalAddress() { return (InetSocketAddress) ss.socket().getLocalSocketAddress(); } private void addCnxn(Cnxn cnxn) { synchronized (cnxns) { cnxns.add(cnxn); } } public boolean isRunning() { return !ss.socket().isClosed() && isAlive(); } boolean hasCrashed() { return crashed.get(); } /** * Stop nio server from processing requests. (for testing) */ @VisibleForTesting public void suspendProcessing() { synchronized(suspensionLock) { suspended = true; } } /** * Resume processing requests in nio server. (for testing) */ @VisibleForTesting public void resumeProcessing() { synchronized(suspensionLock) { suspended = false; suspensionLock.notify(); } } @Override public void run() { while (!ss.socket().isClosed()) { try { selector.select(1000); synchronized(suspensionLock) { while (suspended) { suspensionLock.wait(); } } Set<SelectionKey> selected; synchronized (this) { selected = selector.selectedKeys(); } ArrayList<SelectionKey> selectedList = new ArrayList<SelectionKey>(selected); Collections.shuffle(selectedList); for (SelectionKey k : selectedList) { if ((k.readyOps() & SelectionKey.OP_ACCEPT) != 0) { SocketChannel sc = ((ServerSocketChannel) k.channel()).accept(); sc.configureBlocking(false); SelectionKey sk = sc.register(selector, SelectionKey.OP_READ); Cnxn cnxn = new Cnxn(sc, sk); sk.attach(cnxn); addCnxn(cnxn); } else if ((k.readyOps() & (SelectionKey.OP_READ | SelectionKey.OP_WRITE)) != 0) { Cnxn c = (Cnxn) k.attachment(); c.doIO(k); } } selected.clear(); } catch (Exception e) { LOG.warn("Exception in server socket loop: " + ss.socket().getInetAddress(), e); } catch (Throwable e) { LOG.error("Error in server socket loop: " + ss.socket().getInetAddress(), e); crashed.set(true); break; } } LOG.info("NIOServerCnxn factory exited loop."); clear(); } /** * Clear all the connections in the selector. * */ synchronized public void clear() { selector.wakeup(); synchronized (cnxns) { // got to clear all the connections that we have in the selector for (Iterator<Cnxn> it = cnxns.iterator(); it.hasNext();) { Cnxn cnxn = it.next(); it.remove(); try { cnxn.close(); } catch (Exception e) { // Do nothing. 
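// (Editor's note, not in the original source:) close() here is best-effort;
// a failure to close one connection must not abort cleanup of the remaining
// connections while the factory is being torn down.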
} } } } public void shutdown() { try { ss.close(); clear(); this.interrupt(); this.join(); } catch (InterruptedException e) { LOG.warn("Interrupted", e); } catch (Exception e) { LOG.error("Unexpected exception", e); } } /** * The buffer will cause the connection to be closed when we do a send. */ static final ByteBuffer closeConn = ByteBuffer.allocate(0); public class Cnxn { private SocketChannel sock; private SelectionKey sk; boolean initialized; ByteBuffer lenBuffer = ByteBuffer.allocate(4); ByteBuffer incomingBuffer = lenBuffer; LinkedBlockingQueue<ByteBuffer> outgoingBuffers = new LinkedBlockingQueue<ByteBuffer>(); int sessionTimeout; void doIO(SelectionKey k) throws InterruptedException { try { if (sock == null) { return; } if (k.isReadable()) { int rc = sock.read(incomingBuffer); if (rc < 0) { LOG.info("Peer closed connection. rc={} {}", rc, sock); close(); return; } if (incomingBuffer.remaining() == 0) { incomingBuffer.flip(); if (incomingBuffer == lenBuffer) { readLength(k); } else { cnxnStats.packetsReceived++; ServerStats.getInstance().incrementPacketsReceived(); try { readRequest(); } finally { lenBuffer.clear(); incomingBuffer = lenBuffer; } } } } if (k.isWritable()) { if (outgoingBuffers.size() > 0) { // ZooLog.logTraceMessage(LOG, // ZooLog.CLIENT_DATA_PACKET_TRACE_MASK, // "sk " + k + " is valid: " + // k.isValid()); /* * This is going to reset the buffer position to 0 and * the limit to the size of the buffer, so that we can * fill it with data from the non-direct buffers that we * need to send. */ directBuffer.clear(); for (ByteBuffer b : outgoingBuffers) { if (directBuffer.remaining() < b.remaining()) { /* * When we call put later, if the directBuffer * is too small to hold everything, nothing will * be copied, so we've got to slice the buffer * if it's too big. */ b = (ByteBuffer) b.slice().limit(directBuffer.remaining()); } /* * put() is going to modify the positions of both * buffers, but we don't want to change the position * of the source buffers (we'll do that after the * send, if needed), so we save and reset the * position after the copy */ int p = b.position(); directBuffer.put(b); b.position(p); if (directBuffer.remaining() == 0) { break; } } /* * Do the flip: limit becomes position, position gets * set to 0. This sets us up for the write. */ directBuffer.flip(); int sent = sock.write(directBuffer); ByteBuffer bb; // Remove the buffers that we have sent while (outgoingBuffers.size() > 0) { bb = outgoingBuffers.peek(); if (bb == closeConn) { throw new IOException("closing"); } int left = bb.remaining() - sent; if (left > 0) { /* * We only partially sent this buffer, so we * update the position and exit the loop. 
*/ bb.position(bb.position() + sent); break; } cnxnStats.packetsSent++; /* We've sent the whole buffer, so drop the buffer */ sent -= bb.remaining(); ServerStats.getInstance().incrementPacketsSent(); outgoingBuffers.remove(); } // ZooLog.logTraceMessage(LOG, // ZooLog.CLIENT_DATA_PACKET_TRACE_MASK, "after send, // outgoingBuffers.size() = " + outgoingBuffers.size()); } synchronized (this) { if (outgoingBuffers.size() == 0) { if (!initialized && (sk.interestOps() & SelectionKey.OP_READ) == 0) { throw new IOException("Responded to info probe"); } sk.interestOps(sk.interestOps() & (~SelectionKey.OP_WRITE)); } else { sk.interestOps(sk.interestOps() | SelectionKey.OP_WRITE); } } } } catch (CancelledKeyException e) { close(); } catch (IOException e) { // LOG.error("FIXMSG",e); close(); } } private void readRequest() throws IOException { incomingBuffer = incomingBuffer.slice(); processor.processPacket(incomingBuffer, this); } public void disableRecv() { sk.interestOps(sk.interestOps() & (~SelectionKey.OP_READ)); } public void enableRecv() { if (sk.isValid()) { int interest = sk.interestOps(); if ((interest & SelectionKey.OP_READ) == 0) { sk.interestOps(interest | SelectionKey.OP_READ); } } } private void readLength(SelectionKey k) throws IOException { // Read the length, now get the buffer int len = lenBuffer.getInt(); if (len < 0 || len > 0xfffff) { throw new IOException("Len error " + len); } incomingBuffer = ByteBuffer.allocate(len); } /** * The number of requests that have been submitted but not yet responded * to. */ int outstandingRequests; /* * (non-Javadoc) * * @see org.apache.zookeeper.server.ServerCnxnIface#getSessionTimeout() */ public int getSessionTimeout() { return sessionTimeout; } String peerName = null; public Cnxn(SocketChannel sock, SelectionKey sk) throws IOException { this.sock = sock; this.sk = sk; sock.socket().setTcpNoDelay(conf.getServerTcpNoDelay()); sock.socket().setSoLinger(true, 2); sk.interestOps(SelectionKey.OP_READ); if (LOG.isTraceEnabled()) { peerName = sock.socket().toString(); } lenBuffer.clear(); incomingBuffer = lenBuffer; } @Override public String toString() { return "NIOServerCnxn object with sock = " + sock + " and sk = " + sk; } public String getPeerName() { if (peerName == null) { peerName = sock.socket().toString(); } return peerName; } boolean closed; /* * (non-Javadoc) * * @see org.apache.zookeeper.server.ServerCnxnIface#close() */ public void close() { if (closed) { return; } closed = true; synchronized (cnxns) { cnxns.remove(this); } LOG.debug("close NIOServerCnxn: {}", sock); try { /* * The following sequence of code is stupid! You would think * that only sock.close() is needed, but alas, it doesn't work * that way. If you just do sock.close() there are cases where * the socket doesn't actually close... */ sock.socket().shutdownOutput(); } catch (IOException e) { // This is a relatively common exception that we can't avoid } try { sock.socket().shutdownInput(); } catch (IOException e) { } try { sock.socket().close(); } catch (IOException e) { LOG.error("FIXMSG", e); } try { sock.close(); // XXX The next line doesn't seem to be needed, but some posts // to forums suggest that it is needed. Keep in mind if errors // in // this section arise. 
// factory.selector.wakeup(); } catch (IOException e) { LOG.error("FIXMSG", e); } sock = null; if (sk != null) { try { // need to cancel this selection key from the selector sk.cancel(); } catch (Exception e) { } } } private void makeWritable(SelectionKey sk) { try { selector.wakeup(); if (sk.isValid()) { sk.interestOps(sk.interestOps() | SelectionKey.OP_WRITE); } } catch (RuntimeException e) { LOG.error("Problem setting writable", e); throw e; } } private void sendBuffers(ByteBuffer bb[]) { ByteBuffer len = ByteBuffer.allocate(4); int total = 0; for (int i = 0; i < bb.length; i++) { if (bb[i] != null) { total += bb[i].remaining(); } } LOG.debug("Sending response of size {} to {}", total, peerName); len.putInt(total); len.flip(); outgoingBuffers.add(len); for (int i = 0; i < bb.length; i++) { if (bb[i] != null) { outgoingBuffers.add(bb[i]); } } makeWritable(sk); } public void sendResponse(ByteBuffer... bb) { synchronized (this) { if (closed) { return; } sendBuffers(bb); outstandingRequests--; } // acquire these monitors in order to avoid deadlock during shutdown // it doesn't matter much whether we do this synchronously with sendBuffers, as long as it happens synchronized (NIOServerFactory.this) { synchronized (this) { // check throttling if (outstandingRequests < outstandingLimit) { sk.selector().wakeup(); enableRecv(); } } } } public InetSocketAddress getRemoteAddress() { return (InetSocketAddress) sock.socket().getRemoteSocketAddress(); } private class CnxnStats { long packetsSent = 0; long packetsReceived = 0; /** * The number of requests that have been submitted but not yet * responded to. */ public long getOutstandingRequests() { synchronized(Cnxn.this) { return outstandingRequests; } } public long getPacketsReceived() { return packetsReceived; } public long getPacketsSent() { return packetsSent; } @Override public String toString() { StringBuilder sb = new StringBuilder(); Channel channel = sk.channel(); if (channel instanceof SocketChannel) { sb.append(" ").append(((SocketChannel) channel).socket().getRemoteSocketAddress()).append("[") .append(Integer.toHexString(sk.interestOps())).append("](queued=").append( getOutstandingRequests()).append(",recved=").append(getPacketsReceived()).append( ",sent=").append(getPacketsSent()).append(")\n"); } return sb.toString(); } } private CnxnStats cnxnStats = new CnxnStats(); public CnxnStats getStats() { return cnxnStats; } } } PerChannelBookieClient.java000066400000000000000000001017161244507361200357610ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/proto/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package org.apache.bookkeeper.proto; import java.io.IOException; import java.net.InetSocketAddress; import java.nio.channels.ClosedChannelException; import java.util.ArrayDeque; import java.util.Queue; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ScheduledExecutorService; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicLong; import org.apache.bookkeeper.client.BKException; import org.apache.bookkeeper.conf.ClientConfiguration; import org.apache.bookkeeper.proto.BookieProtocol.PacketHeader; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.GenericCallback; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.ReadEntryCallback; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.WriteCallback; import org.apache.bookkeeper.util.MathUtils; import org.apache.bookkeeper.util.OrderedSafeExecutor; import org.apache.bookkeeper.util.SafeRunnable; import org.jboss.netty.bootstrap.ClientBootstrap; import org.jboss.netty.buffer.ChannelBuffer; import org.jboss.netty.buffer.ChannelBuffers; import org.jboss.netty.channel.Channel; import org.jboss.netty.channel.ChannelFuture; import org.jboss.netty.channel.ChannelFutureListener; import org.jboss.netty.channel.ChannelHandlerContext; import org.jboss.netty.channel.ChannelPipeline; import org.jboss.netty.channel.ChannelPipelineCoverage; import org.jboss.netty.channel.ChannelPipelineFactory; import org.jboss.netty.channel.ChannelStateEvent; import org.jboss.netty.channel.Channels; import org.jboss.netty.channel.ExceptionEvent; import org.jboss.netty.channel.MessageEvent; import org.jboss.netty.channel.SimpleChannelHandler; import org.jboss.netty.channel.socket.ClientSocketChannelFactory; import org.jboss.netty.handler.codec.frame.CorruptedFrameException; import org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder; import org.jboss.netty.handler.codec.frame.TooLongFrameException; import org.jboss.netty.handler.timeout.ReadTimeoutHandler; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * This class manages all details of connection to a particular bookie. It also * has reconnect logic if a connection to a bookie fails. * */ @ChannelPipelineCoverage("one") public class PerChannelBookieClient extends SimpleChannelHandler implements ChannelPipelineFactory { static final Logger LOG = LoggerFactory.getLogger(PerChannelBookieClient.class); static final long maxMemory = Runtime.getRuntime().maxMemory() / 5; public static final int MAX_FRAME_LENGTH = 2 * 1024 * 1024; // 2M InetSocketAddress addr; AtomicLong totalBytesOutstanding; ClientSocketChannelFactory channelFactory; OrderedSafeExecutor executor; ScheduledExecutorService timeoutExecutor; ConcurrentHashMap<CompletionKey, AddCompletion> addCompletions = new ConcurrentHashMap<CompletionKey, AddCompletion>(); ConcurrentHashMap<CompletionKey, ReadCompletion> readCompletions = new ConcurrentHashMap<CompletionKey, ReadCompletion>(); /** * The following member variables do not need to be concurrent, or volatile * because they are always updated under a lock */ Queue<GenericCallback<Void>> pendingOps = new ArrayDeque<GenericCallback<Void>>(); volatile Channel channel = null; private class TimeoutTask implements Runnable { @Override public void run() { errorOutTimedOutEntries(); } } enum ConnectionState { DISCONNECTED, CONNECTING, CONNECTED, CLOSED }; volatile ConnectionState state; private final ClientConfiguration conf; /** * Error out any entries that have timed out. 
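 * (Editor's addition, not in the original source:) a key times out once
 * MathUtils.elapsedMSec(requestAt) exceeds the configured limit; the
 * add/read entry timeouts are configured in seconds, so for example with
 * conf.getAddEntryTimeout() == 10, an add outstanding for more than
 * 10,000 ms is errored out on the next TimeoutTask run.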
*/ private void errorOutTimedOutEntries() { int numAdd = 0, numRead = 0; int total = 0; try { for (CompletionKey key : addCompletions.keySet()) { total++; if (key.shouldTimeout(conf.getAddEntryTimeout() * 1000)) { errorOutAddKey(key); numAdd++; } } for (CompletionKey key : readCompletions.keySet()) { total++; if (key.shouldTimeout(conf.getReadEntryTimeout() * 1000)) { errorOutReadKey(key); numRead++; } } } catch (Throwable t) { LOG.error("Caught Throwable while erroring out timed out entries : ", t); } if (numAdd + numRead > 0) { LOG.info("Timeout task iterated through a total of {} keys.", total); LOG.info("Timeout Task errored out {} add entry requests.", numAdd); LOG.info("Timeout Task errored out {} read entry requests.", numRead); } } public PerChannelBookieClient(OrderedSafeExecutor executor, ClientSocketChannelFactory channelFactory, InetSocketAddress addr, AtomicLong totalBytesOutstanding, ScheduledExecutorService timeoutExecutor) { this(new ClientConfiguration(), executor, channelFactory, addr, totalBytesOutstanding, timeoutExecutor); } public PerChannelBookieClient(OrderedSafeExecutor executor, ClientSocketChannelFactory channelFactory, InetSocketAddress addr, AtomicLong totalBytesOutstanding) { this(new ClientConfiguration(), executor, channelFactory, addr, totalBytesOutstanding, null); } public PerChannelBookieClient(ClientConfiguration conf, OrderedSafeExecutor executor, ClientSocketChannelFactory channelFactory, InetSocketAddress addr, AtomicLong totalBytesOutstanding, ScheduledExecutorService timeoutExecutor) { this.conf = conf; this.addr = addr; this.executor = executor; this.totalBytesOutstanding = totalBytesOutstanding; this.channelFactory = channelFactory; this.state = ConnectionState.DISCONNECTED; this.timeoutExecutor = timeoutExecutor; // schedule the timeout task if (null != this.timeoutExecutor) { this.timeoutExecutor.scheduleWithFixedDelay(new TimeoutTask(), conf.getTimeoutTaskIntervalMillis(), conf.getTimeoutTaskIntervalMillis(), TimeUnit.MILLISECONDS); } } private void connect() { LOG.info("Connecting to bookie: {}", addr); // Set up the ClientBootstrap so we can create a new Channel connection // to the bookie. 
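// (Editor's note, not in the original source:) tcpNoDelay is taken from the
// client configuration, while keepAlive is always enabled for bookie
// channels; the pipeline itself comes from getPipeline() further below.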
ClientBootstrap bootstrap = new ClientBootstrap(channelFactory); bootstrap.setPipelineFactory(this); bootstrap.setOption("tcpNoDelay", conf.getClientTcpNoDelay()); bootstrap.setOption("keepAlive", true); ChannelFuture future = bootstrap.connect(addr); future.addListener(new ChannelFutureListener() { @Override public void operationComplete(ChannelFuture future) throws Exception { LOG.debug("Channel connected ({}) {}", future.isSuccess(), future.getChannel()); int rc; Queue<GenericCallback<Void>> oldPendingOps; synchronized (PerChannelBookieClient.this) { if (future.isSuccess() && state == ConnectionState.CONNECTING) { LOG.info("Successfully connected to bookie: {}", future.getChannel()); rc = BKException.Code.OK; channel = future.getChannel(); state = ConnectionState.CONNECTED; } else if (future.isSuccess() && (state == ConnectionState.CLOSED || state == ConnectionState.DISCONNECTED)) { LOG.warn("Closed before connection completed, clean up: {}, current state {}", future.getChannel(), state); closeChannel(future.getChannel()); rc = BKException.Code.BookieHandleNotAvailableException; channel = null; } else if (future.isSuccess() && state == ConnectionState.CONNECTED) { LOG.debug("Already connected with another channel({}), so close the new channel({})", channel, future.getChannel()); closeChannel(future.getChannel()); return; // pendingOps should have been completed when other channel connected } else { LOG.error("Could not connect to bookie: {}/{}, current state {} : ", new Object[] { future.getChannel(), addr, state, future.getCause() }); rc = BKException.Code.BookieHandleNotAvailableException; closeChannel(future.getChannel()); channel = null; if (state != ConnectionState.CLOSED) { state = ConnectionState.DISCONNECTED; } } // trick to not do operations under the lock, take the list // of pending ops and assign it to a new variable, while // emptying the pending ops by just assigning it to a new // list oldPendingOps = pendingOps; pendingOps = new ArrayDeque<GenericCallback<Void>>(); } for (GenericCallback<Void> pendingOp : oldPendingOps) { pendingOp.operationComplete(rc, null); } } }); } void connectIfNeededAndDoOp(GenericCallback<Void> op) { boolean completeOpNow = false; int opRc = BKException.Code.OK; // common case without lock first if (channel != null && state == ConnectionState.CONNECTED) { completeOpNow = true; } else { synchronized (this) { // check the channel status again under lock if (channel != null && state == ConnectionState.CONNECTED) { completeOpNow = true; opRc = BKException.Code.OK; } else if (state == ConnectionState.CLOSED) { completeOpNow = true; opRc = BKException.Code.BookieHandleNotAvailableException; } else { // channel is either null (first connection attempt), or the // channel is disconnected. Connection attempt is still in // progress, queue up this op. Op will be executed when // connection attempt either fails or succeeds pendingOps.add(op); if (state == ConnectionState.CONNECTING) { // just return as connection request has already been sent // and is waiting for the response. return; } // switch state to connecting and do connection attempt state = ConnectionState.CONNECTING; } } if (!completeOpNow) { // Start connection attempt to the input server host. 
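// (Editor's sketch of the connection state machine, derived from the code
// above; not in the original source:)
//   DISCONNECTED --connectIfNeededAndDoOp--> CONNECTING --success--> CONNECTED
//   CONNECTING --failure--> DISCONNECTED (pending ops errored out)
//   close() --> CLOSED (terminal; ops fail with BookieHandleNotAvailableException)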
connect(); } } if (completeOpNow) { op.operationComplete(opRc, null); } } /** * This method should be called only after connection has been checked for * {@link #connectIfNeededAndDoOp(GenericCallback)} * * @param ledgerId * @param masterKey * @param entryId * @param toSend * @param cb * @param ctx * @param options */ void addEntry(final long ledgerId, byte[] masterKey, final long entryId, ChannelBuffer toSend, WriteCallback cb, Object ctx, final int options) { final int entrySize = toSend.readableBytes(); final CompletionKey completionKey = new CompletionKey(ledgerId, entryId); addCompletions.put(completionKey, new AddCompletion(cb, entrySize, ctx)); int totalHeaderSize = 4 // for the length of the packet + 4 // for the type of request + BookieProtocol.MASTER_KEY_LENGTH; // for the master key try{ ChannelBuffer header = channel.getConfig().getBufferFactory().getBuffer(totalHeaderSize); header.writeInt(totalHeaderSize - 4 + entrySize); header.writeInt(new PacketHeader(BookieProtocol.CURRENT_PROTOCOL_VERSION, BookieProtocol.ADDENTRY, (short)options).toInt()); header.writeBytes(masterKey, 0, BookieProtocol.MASTER_KEY_LENGTH); ChannelBuffer wrappedBuffer = ChannelBuffers.wrappedBuffer(header, toSend); final Channel c = channel; if (c == null) { errorOutReadKey(completionKey); return; } ChannelFuture future = c.write(wrappedBuffer); future.addListener(new ChannelFutureListener() { @Override public void operationComplete(ChannelFuture future) throws Exception { if (future.isSuccess()) { if (LOG.isDebugEnabled()) { LOG.debug("Successfully wrote request for adding entry: " + entryId + " ledger-id: " + ledgerId + " bookie: " + c.getRemoteAddress() + " entry length: " + entrySize); } // totalBytesOutstanding.addAndGet(entrySize); } else { if (!(future.getCause() instanceof ClosedChannelException)) { LOG.warn("Writing addEntry(lid={}, eid={}) to channel {} failed : ", new Object[] { ledgerId, entryId, c, future.getCause() }); } errorOutAddKey(completionKey); } } }); } catch (Throwable e) { LOG.warn("Add entry operation failed", e); errorOutAddKey(completionKey); } } public void readEntryAndFenceLedger(final long ledgerId, byte[] masterKey, final long entryId, ReadEntryCallback cb, Object ctx) { final CompletionKey key = new CompletionKey(ledgerId, entryId); readCompletions.put(key, new ReadCompletion(cb, ctx)); int totalHeaderSize = 4 // for the length of the packet + 4 // for request type + 8 // for ledgerId + 8 // for entryId + BookieProtocol.MASTER_KEY_LENGTH; // for masterKey ChannelBuffer tmpEntry = channel.getConfig().getBufferFactory().getBuffer(totalHeaderSize); tmpEntry.writeInt(totalHeaderSize - 4); tmpEntry.writeInt(new PacketHeader(BookieProtocol.CURRENT_PROTOCOL_VERSION, BookieProtocol.READENTRY, BookieProtocol.FLAG_DO_FENCING).toInt()); tmpEntry.writeLong(ledgerId); tmpEntry.writeLong(entryId); tmpEntry.writeBytes(masterKey, 0, BookieProtocol.MASTER_KEY_LENGTH); final Channel c = channel; if (c == null) { errorOutReadKey(key); return; } ChannelFuture future = c.write(tmpEntry); future.addListener(new ChannelFutureListener() { @Override public void operationComplete(ChannelFuture future) throws Exception { if (future.isSuccess()) { if (LOG.isDebugEnabled()) { LOG.debug("Successfully wrote request for reading entry: " + entryId + " ledger-id: " + ledgerId + " bookie: " + c.getRemoteAddress()); } } else { if (!(future.getCause() instanceof ClosedChannelException)) { LOG.warn("Writing readEntryAndFenceLedger(lid={}, eid={}) to channel {} failed : ", new Object[] { ledgerId, entryId, c, 
future.getCause() }); } errorOutReadKey(key); } } }); } public void readEntry(final long ledgerId, final long entryId, ReadEntryCallback cb, Object ctx) { final CompletionKey key = new CompletionKey(ledgerId, entryId); readCompletions.put(key, new ReadCompletion(cb, ctx)); int totalHeaderSize = 4 // for the length of the packet + 4 // for request type + 8 // for ledgerId + 8; // for entryId try{ ChannelBuffer tmpEntry = channel.getConfig().getBufferFactory().getBuffer(totalHeaderSize); tmpEntry.writeInt(totalHeaderSize - 4); tmpEntry.writeInt(new PacketHeader(BookieProtocol.CURRENT_PROTOCOL_VERSION, BookieProtocol.READENTRY, BookieProtocol.FLAG_NONE).toInt()); tmpEntry.writeLong(ledgerId); tmpEntry.writeLong(entryId); final Channel c = channel; if (c == null) { errorOutReadKey(key); return; } ChannelFuture future = c.write(tmpEntry); future.addListener(new ChannelFutureListener() { @Override public void operationComplete(ChannelFuture future) throws Exception { if (future.isSuccess()) { if (LOG.isDebugEnabled()) { LOG.debug("Successfully wrote request for reading entry: " + entryId + " ledger-id: " + ledgerId + " bookie: " + c.getRemoteAddress()); } } else { if (!(future.getCause() instanceof ClosedChannelException)) { LOG.warn("Writing readEntry(lid={}, eid={}) to channel {} failed : ", new Object[] { ledgerId, entryId, c, future.getCause() }); } errorOutReadKey(key); } } }); } catch(Throwable e) { LOG.warn("Read entry operation failed", e); errorOutReadKey(key); } } /** * Disconnects the bookie client. It can be reused. */ public void disconnect() { closeInternal(false); } /** * Closes the bookie client permanently. It cannot be reused. */ public void close() { closeInternal(true); } private void closeInternal(boolean permanent) { Channel toClose = null; synchronized (this) { if (permanent) { state = ConnectionState.CLOSED; } else if (state != ConnectionState.CLOSED) { state = ConnectionState.DISCONNECTED; } toClose = channel; channel = null; } if (toClose != null) { closeChannel(toClose).awaitUninterruptibly(); } } private ChannelFuture closeChannel(Channel c) { LOG.debug("Closing channel {}", c); ReadTimeoutHandler timeout = c.getPipeline().get(ReadTimeoutHandler.class); if (timeout != null) { timeout.releaseExternalResources(); } return c.close(); } void errorOutReadKey(final CompletionKey key) { executor.submitOrdered(key.ledgerId, new SafeRunnable() { @Override public void safeRun() { ReadCompletion readCompletion = readCompletions.remove(key); String bAddress = "null"; Channel c = channel; if(c != null) { bAddress = c.getRemoteAddress().toString(); } if (readCompletion != null) { LOG.debug("Could not write request for reading entry: {}" + " ledger-id: {} bookie: {}", new Object[] { key.entryId, key.ledgerId, bAddress }); readCompletion.cb.readEntryComplete(BKException.Code.BookieHandleNotAvailableException, key.ledgerId, key.entryId, null, readCompletion.ctx); } } }); } void errorOutAddKey(final CompletionKey key) { executor.submitOrdered(key.ledgerId, new SafeRunnable() { @Override public void safeRun() { AddCompletion addCompletion = addCompletions.remove(key); if (addCompletion != null) { String bAddress = "null"; Channel c = channel; if(c != null) { bAddress = c.getRemoteAddress().toString(); } LOG.debug("Could not write request for adding entry: {} ledger-id: {} bookie: {}", new Object[] { key.entryId, key.ledgerId, bAddress }); addCompletion.cb.writeComplete(BKException.Code.BookieHandleNotAvailableException, key.ledgerId, key.entryId, addr, addCompletion.ctx); 
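// (Editor's note, not in the original source:) whichever thread removes the
// key from addCompletions owns invoking the callback, so it runs exactly
// once; see the rationale in errorOutOutstandingEntries() below.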
LOG.debug("Invoked callback method: {}", key.entryId); } } }); } /** * Errors out pending entries. We call this method from one thread to avoid * concurrent executions to QuorumOpMonitor (implements callbacks). It seems * simpler to call it from BookieHandle instead of calling directly from * here. */ void errorOutOutstandingEntries() { // DO NOT rewrite these using Map.Entry iterations. We want to iterate // on keys and see if we are successfully able to remove the key from // the map. Because the add and the read methods also do the same thing // in case they get a write failure on the socket. The one who // successfully removes the key from the map is the one responsible for // calling the application callback. for (CompletionKey key : addCompletions.keySet()) { errorOutAddKey(key); } for (CompletionKey key : readCompletions.keySet()) { errorOutReadKey(key); } } /** * In the netty pipeline, we need to split packets based on length, so we * use the {@link LengthFieldBasedFrameDecoder}. Other than that all actions * are carried out in this class, e.g., making sense of received messages, * prepending the length to outgoing packets etc. */ @Override public ChannelPipeline getPipeline() throws Exception { ChannelPipeline pipeline = Channels.pipeline(); pipeline.addLast("lengthbasedframedecoder", new LengthFieldBasedFrameDecoder(MAX_FRAME_LENGTH, 0, 4, 0, 4)); pipeline.addLast("mainhandler", this); return pipeline; } /** * If our channel has disconnected, we just error out the pending entries */ @Override public void channelDisconnected(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception { Channel c = ctx.getChannel(); LOG.info("Disconnected from bookie channel {}", c); if (c != null) { closeChannel(c); } errorOutOutstandingEntries(); synchronized (this) { if (this.channel == c && state != ConnectionState.CLOSED) { state = ConnectionState.DISCONNECTED; } } // we don't want to reconnect right away. If someone sends a request to // this address, we will reconnect. } /** * Called by netty when an exception happens in one of the netty threads * (mostly due to what we do in the netty threads) */ @Override public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) throws Exception { Throwable t = e.getCause(); if (t instanceof CorruptedFrameException || t instanceof TooLongFrameException) { LOG.error("Corrupted frame received from bookie: {}", e.getChannel().getRemoteAddress()); return; } if (t instanceof IOException) { // these are thrown when a bookie fails, logging them just pollutes // the logs (the failure is logged from the listeners on the write // operation), so I'll just ignore it here. return; } synchronized (this) { if (state == ConnectionState.CLOSED) { LOG.debug("Unexpected exception caught by bookie client channel handler, " + "but the client is closed, so it isn't important", t); } else { LOG.error("Unexpected exception caught by bookie client channel handler", t); } } // Since we are a library, cant terminate App here, can we? 
} /** * Called by netty when a message is received on a channel */ @Override public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception { if (!(e.getMessage() instanceof ChannelBuffer)) { ctx.sendUpstream(e); return; } final ChannelBuffer buffer = (ChannelBuffer) e.getMessage(); final int rc; final long ledgerId, entryId; final PacketHeader header; try { header = PacketHeader.fromInt(buffer.readInt()); rc = buffer.readInt(); ledgerId = buffer.readLong(); entryId = buffer.readLong(); } catch (IndexOutOfBoundsException ex) { LOG.error("Unparseable response from bookie: " + addr, ex); return; } executor.submitOrdered(ledgerId, new SafeRunnable() { @Override public void safeRun() { switch (header.getOpCode()) { case BookieProtocol.ADDENTRY: handleAddResponse(ledgerId, entryId, rc); break; case BookieProtocol.READENTRY: handleReadResponse(ledgerId, entryId, rc, buffer); break; default: LOG.error("Unexpected response, type: " + header.getOpCode() + " received from bookie: " + addr + " , ignoring"); } } }); } void handleAddResponse(long ledgerId, long entryId, int rc) { if (LOG.isDebugEnabled()) { LOG.debug("Got response for add request from bookie: {} for ledger: {} entry: {}" + " rc: {}", new Object[] { addr, ledgerId, entryId, rc }); } // convert to BKException code because that's what the upper // layers expect. This is UGLY, there should just be one set of // error codes. switch (rc) { case BookieProtocol.EOK: rc = BKException.Code.OK; break; case BookieProtocol.EBADVERSION: rc = BKException.Code.ProtocolVersionException; break; case BookieProtocol.EFENCED: rc = BKException.Code.LedgerFencedException; break; case BookieProtocol.EUA: rc = BKException.Code.UnauthorizedAccessException; break; case BookieProtocol.EREADONLY: rc = BKException.Code.WriteOnReadOnlyBookieException; break; default: LOG.warn("Add for ledger: {}, entry: {} failed on bookie: {}" + " with unknown code: {}", new Object[] { ledgerId, entryId, addr, rc }); rc = BKException.Code.WriteException; break; } AddCompletion ac; ac = addCompletions.remove(new CompletionKey(ledgerId, entryId)); if (ac == null) { LOG.debug("Unexpected add response received from bookie: {} for ledger: {}" + ", entry: {}, ignoring", new Object[] { addr, ledgerId, entryId }); return; } // totalBytesOutstanding.addAndGet(-ac.size); ac.cb.writeComplete(rc, ledgerId, entryId, addr, ac.ctx); } void handleReadResponse(long ledgerId, long entryId, int rc, ChannelBuffer buffer) { if (LOG.isDebugEnabled()) { LOG.debug("Got response for read request from bookie: {} for ledger: {} entry: {}" + " rc: {} entry length: {}", new Object[] { addr, ledgerId, entryId, rc, buffer.readableBytes() }); } // convert to BKException code because that's what the upper // layers expect. This is UGLY, there should just be one set of // error codes. 
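// (Editor's summary of the mapping below, not in the original source:)
// EOK -> OK, ENOENTRY/ENOLEDGER -> NoSuchEntryException,
// EBADVERSION -> ProtocolVersionException, EUA -> UnauthorizedAccessException,
// anything else -> ReadException.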
if (rc == BookieProtocol.EOK) { rc = BKException.Code.OK; } else if (rc == BookieProtocol.ENOENTRY || rc == BookieProtocol.ENOLEDGER) { rc = BKException.Code.NoSuchEntryException; } else if (rc == BookieProtocol.EBADVERSION) { rc = BKException.Code.ProtocolVersionException; } else if (rc == BookieProtocol.EUA) { rc = BKException.Code.UnauthorizedAccessException; } else { LOG.warn("Read for ledger: {}, entry: {} failed on bookie: {}" + " with unknown code: {}", new Object[] { ledgerId, entryId, addr, rc }); rc = BKException.Code.ReadException; } CompletionKey key = new CompletionKey(ledgerId, entryId); ReadCompletion readCompletion = readCompletions.remove(key); if (readCompletion == null) { /* * This is a special case. When recovering a ledger, a client * submits a read request with id -1, and receives a response with a * different entry id. */ readCompletion = readCompletions.remove(new CompletionKey(ledgerId, BookieProtocol.LAST_ADD_CONFIRMED)); } if (readCompletion == null) { LOG.debug("Unexpected read response received from bookie: {} for ledger: {}" + ", entry: {} , ignoring", new Object[] { addr, ledgerId, entryId }); return; } readCompletion.cb.readEntryComplete(rc, ledgerId, entryId, buffer.slice(), readCompletion.ctx); } /** * Boiler-plate wrapper classes follow * */ // visible for testing static class ReadCompletion { final ReadEntryCallback cb; final Object ctx; public ReadCompletion(ReadEntryCallback cb, Object ctx) { this.cb = cb; this.ctx = ctx; } } // visible for testing static class AddCompletion { final WriteCallback cb; //final long size; final Object ctx; public AddCompletion(WriteCallback cb, long size, Object ctx) { this.cb = cb; //this.size = size; this.ctx = ctx; } } // visible for testing CompletionKey newCompletionKey(long ledgerId, long entryId) { return new CompletionKey(ledgerId, entryId); } // visible for testing static class CompletionKey { long ledgerId; long entryId; final long requestAt; CompletionKey(long ledgerId, long entryId) { this.ledgerId = ledgerId; this.entryId = entryId; this.requestAt = MathUtils.nowInNano(); } @Override public boolean equals(Object obj) { if (!(obj instanceof CompletionKey)) { return false; } CompletionKey that = (CompletionKey) obj; return this.ledgerId == that.ledgerId && this.entryId == that.entryId; } @Override public int hashCode() { return ((int) ledgerId << 16) ^ ((int) entryId); } @Override public String toString() { return String.format("LedgerEntry(%d, %d)", ledgerId, entryId); } public boolean shouldTimeout(long timeout) { return elapsedTime() >= timeout; } public long elapsedTime() { return MathUtils.elapsedMSec(requestAt); } } } ServerStats.java000066400000000000000000000053551244507361200337410ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/proto/** * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package org.apache.bookkeeper.proto; import org.apache.bookkeeper.util.MathUtils; public class ServerStats { private static ServerStats instance = new ServerStats(); private long packetsSent; private long packetsReceived; private long maxLatency; private long minLatency = Long.MAX_VALUE; private long totalLatency = 0; private long count = 0; static public ServerStats getInstance() { return instance; } protected ServerStats() { } // getters synchronized public long getMinLatency() { return (minLatency == Long.MAX_VALUE) ? 0 : minLatency; } synchronized public long getAvgLatency() { if (count != 0) return totalLatency / count; return 0; } synchronized public long getMaxLatency() { return maxLatency; } synchronized public long getPacketsReceived() { return packetsReceived; } synchronized public long getPacketsSent() { return packetsSent; } @Override public String toString() { StringBuilder sb = new StringBuilder(); sb.append("Latency min/avg/max: " + getMinLatency() + "/" + getAvgLatency() + "/" + getMaxLatency() + "\n"); sb.append("Received: " + getPacketsReceived() + "\n"); sb.append("Sent: " + getPacketsSent() + "\n"); return sb.toString(); } synchronized void updateLatency(long requestCreateTime) { long latency = MathUtils.now() - requestCreateTime; totalLatency += latency; count++; if (latency < minLatency) { minLatency = latency; } if (latency > maxLatency) { maxLatency = latency; } } synchronized public void resetLatency() { totalLatency = count = maxLatency = 0; minLatency = Long.MAX_VALUE; } synchronized public void resetMaxLatency() { maxLatency = getMinLatency(); } synchronized public void incrementPacketsReceived() { packetsReceived++; } synchronized public void incrementPacketsSent() { packetsSent++; } synchronized public void resetRequestCounters() { packetsReceived = packetsSent = 0; } } bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/replication/000077500000000000000000000000001244507361200320265ustar00rootroot00000000000000Auditor.java000066400000000000000000000572621244507361200342350ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/replication/** * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
 * */ package org.apache.bookkeeper.replication; import java.io.IOException; import java.util.Collection; import java.util.List; import java.util.ArrayList; import java.util.Map; import java.util.Set; import java.util.concurrent.Future; import java.util.concurrent.TimeUnit; import java.util.concurrent.Executors; import java.util.concurrent.ScheduledExecutorService; import java.util.concurrent.ThreadFactory; import java.net.InetSocketAddress; import java.util.concurrent.CountDownLatch; import java.util.concurrent.atomic.AtomicInteger; import org.apache.bookkeeper.conf.ClientConfiguration; import org.apache.bookkeeper.client.BookKeeperAdmin; import org.apache.bookkeeper.client.BookKeeper; import org.apache.bookkeeper.client.LedgerHandle; import org.apache.bookkeeper.client.BKException; import org.apache.bookkeeper.client.LedgerChecker; import org.apache.bookkeeper.client.LedgerFragment; import org.apache.bookkeeper.client.BookiesListener; import org.apache.bookkeeper.util.StringUtils; import org.apache.bookkeeper.util.ZkUtils; import org.apache.bookkeeper.zookeeper.ZooKeeperWatcherBase; import org.apache.bookkeeper.conf.ServerConfiguration; import org.apache.bookkeeper.meta.LedgerManagerFactory; import org.apache.bookkeeper.meta.LedgerManager; import org.apache.bookkeeper.meta.LedgerUnderreplicationManager; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.GenericCallback; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.Processor; import org.apache.bookkeeper.replication.ReplicationException.BKAuditException; import org.apache.bookkeeper.replication.ReplicationException.CompatibilityException; import org.apache.bookkeeper.replication.ReplicationException.UnavailableException; import org.apache.commons.collections.CollectionUtils; import com.google.common.collect.Sets; import com.google.common.annotations.VisibleForTesting; import com.google.common.util.concurrent.SettableFuture; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.WatchedEvent; import org.apache.zookeeper.ZooKeeper; import org.apache.zookeeper.AsyncCallback; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * The Auditor is a single entity in the entire bookie cluster, watching * all the bookies registered under the 'ledgerrootpath/available' zk path. When any * bookie fails or disconnects from zk, the auditor initiates the * re-replication activities by marking all the corresponding ledgers of the * failed bookie as underreplicated znodes in zk.
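 * <p> * A minimal lifecycle sketch (illustrative only, not part of the original source: in production the elected AuditorElector creates and starts the Auditor, and the bookie identifier below is a placeholder of the form HostAddress:Port): * <pre>{@code * Auditor auditor = new Auditor("bookie-host:3181", conf, zkc); * auditor.start(); // watch bookies, schedule periodic checks * // ... on shutdown * auditor.shutdown(); // stop the executor, close the clients * }</pre>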
 */ public class Auditor implements BookiesListener { private static final Logger LOG = LoggerFactory.getLogger(Auditor.class); private final ServerConfiguration conf; private BookKeeper bkc; private BookKeeperAdmin admin; private BookieLedgerIndexer bookieLedgerIndexer; private LedgerManager ledgerManager; private LedgerUnderreplicationManager ledgerUnderreplicationManager; private final ScheduledExecutorService executor; private List<String> knownBookies = new ArrayList<String>(); private final String bookieIdentifier; public Auditor(final String bookieIdentifier, ServerConfiguration conf, ZooKeeper zkc) throws UnavailableException { this.conf = conf; this.bookieIdentifier = bookieIdentifier; initialize(conf, zkc); executor = Executors.newSingleThreadScheduledExecutor(new ThreadFactory() { @Override public Thread newThread(Runnable r) { Thread t = new Thread(r, "AuditorBookie-" + bookieIdentifier); t.setDaemon(true); return t; } }); } private void initialize(ServerConfiguration conf, ZooKeeper zkc) throws UnavailableException { try { LedgerManagerFactory ledgerManagerFactory = LedgerManagerFactory .newLedgerManagerFactory(conf, zkc); ledgerManager = ledgerManagerFactory.newLedgerManager(); this.bookieLedgerIndexer = new BookieLedgerIndexer(ledgerManager); this.ledgerUnderreplicationManager = ledgerManagerFactory .newLedgerUnderreplicationManager(); this.bkc = new BookKeeper(new ClientConfiguration(conf), zkc); this.admin = new BookKeeperAdmin(bkc); } catch (CompatibilityException ce) { throw new UnavailableException( "CompatibilityException while initializing Auditor", ce); } catch (IOException ioe) { throw new UnavailableException( "IOException while initializing Auditor", ioe); } catch (KeeperException ke) { throw new UnavailableException( "KeeperException while initializing Auditor", ke); } catch (InterruptedException ie) { throw new UnavailableException( "Interrupted while initializing Auditor", ie); } } private void submitShutdownTask() { synchronized (this) { if (executor.isShutdown()) { return; } executor.submit(new Runnable() { public void run() { synchronized (Auditor.this) { executor.shutdown(); } } }); } } @VisibleForTesting synchronized Future<?> submitAuditTask() { if (executor.isShutdown()) { SettableFuture<Void> f = SettableFuture.create(); f.setException(new BKAuditException("Auditor shutting down")); return f; } return executor.submit(new Runnable() { public void run() { try { waitIfLedgerReplicationDisabled(); List<String> availableBookies = getAvailableBookies(); // casting to String, as knownBookies and availableBookies // contain only String values // find new bookies (if any) and update the known bookie list @SuppressWarnings("unchecked") Collection<String> newBookies = CollectionUtils.subtract( availableBookies, knownBookies); knownBookies.addAll(newBookies); // find lost bookies (if any) @SuppressWarnings("unchecked") Collection<String> lostBookies = CollectionUtils.subtract( knownBookies, availableBookies); if (lostBookies.size() > 0) { knownBookies.removeAll(lostBookies); auditBookies(); } } catch (BKException bke) { LOG.error("Exception getting bookie list", bke); } catch (InterruptedException ie) { Thread.currentThread().interrupt(); LOG.error("Interrupted while watching available bookies ", ie); } catch (BKAuditException bke) { LOG.error("Exception while watching available bookies", bke); } catch (UnavailableException ue) { LOG.error("Exception while watching available bookies", ue); } catch (KeeperException ke) { LOG.error("Exception reading bookie list", ke); } } }); } public void start() { LOG.info("I'm starting as Auditor Bookie.
ID: {}", bookieIdentifier); // on startup watching available bookie and based on the // available bookies determining the bookie failures. synchronized (this) { if (executor.isShutdown()) { return; } long interval = conf.getAuditorPeriodicCheckInterval(); if (interval > 0) { LOG.info("Auditor periodic ledger checking enabled" + " 'auditorPeriodicCheckInterval' {} seconds", interval); executor.scheduleAtFixedRate(new Runnable() { public void run() { LOG.info("Running periodic check"); try { if (!ledgerUnderreplicationManager.isLedgerReplicationEnabled()) { LOG.info("Ledger replication disabled, skipping"); return; } checkAllLedgers(); } catch (KeeperException ke) { LOG.error("Exception while running periodic check", ke); } catch (InterruptedException ie) { Thread.currentThread().interrupt(); LOG.error("Interrupted while running periodic check", ie); } catch (BKAuditException bkae) { LOG.error("Exception while running periodic check", bkae); } catch (BKException bke) { LOG.error("Exception running periodic check", bke); } catch (IOException ioe) { LOG.error("I/O exception running periodic check", ioe); } catch (ReplicationException.UnavailableException ue) { LOG.error("Underreplication manager unavailable " +"running periodic check", ue); } } }, interval, interval, TimeUnit.SECONDS); } else { LOG.info("Periodic checking disabled"); } try { knownBookies = getAvailableBookies(); } catch (BKException bke) { LOG.error("Couldn't get bookie list, exiting", bke); submitShutdownTask(); } long bookieCheckInterval = conf.getAuditorPeriodicBookieCheckInterval(); if (bookieCheckInterval == 0) { LOG.info("Auditor periodic bookie checking disabled, running once check now anyhow"); executor.submit(BOOKIE_CHECK); } else { LOG.info("Auditor periodic bookie checking enabled" + " 'auditorPeriodicBookieCheckInterval' {} seconds", bookieCheckInterval); executor.scheduleAtFixedRate(BOOKIE_CHECK, 0, bookieCheckInterval, TimeUnit.SECONDS); } } } private void waitIfLedgerReplicationDisabled() throws UnavailableException, InterruptedException { ReplicationEnableCb cb = new ReplicationEnableCb(); if (!ledgerUnderreplicationManager.isLedgerReplicationEnabled()) { ledgerUnderreplicationManager.notifyLedgerReplicationEnabled(cb); cb.await(); } } private List getAvailableBookies() throws BKException { // Get the available bookies, also watch for further changes // Watching on only available bookies is sufficient, as changes in readonly bookies also changes in available // bookies admin.notifyBookiesChanged(this); Collection availableBkAddresses = admin.getAvailableBookies(); Collection readOnlyBkAddresses = admin.getReadOnlyBookies(); availableBkAddresses.addAll(readOnlyBkAddresses); List availableBookies = new ArrayList(); for (InetSocketAddress addr : availableBkAddresses) { availableBookies.add(StringUtils.addrToString(addr)); } return availableBookies; } @SuppressWarnings("unchecked") private void auditBookies() throws BKAuditException, KeeperException, InterruptedException, BKException { try { waitIfLedgerReplicationDisabled(); } catch (UnavailableException ue) { LOG.error("Underreplication unavailable, skipping audit." 
+ "Will retry after a period"); return; } // put exit cases here Map> ledgerDetails = generateBookie2LedgersIndex(); try { if (!ledgerUnderreplicationManager.isLedgerReplicationEnabled()) { // has been disabled while we were generating the index // discard this run, and schedule a new one executor.submit(BOOKIE_CHECK); return; } } catch (UnavailableException ue) { LOG.error("Underreplication unavailable, skipping audit." + "Will retry after a period"); return; } List availableBookies = getAvailableBookies(); // find lost bookies Set knownBookies = ledgerDetails.keySet(); Collection lostBookies = CollectionUtils.subtract(knownBookies, availableBookies); if (lostBookies.size() > 0) handleLostBookies(lostBookies, ledgerDetails); } private Map> generateBookie2LedgersIndex() throws BKAuditException { return bookieLedgerIndexer.getBookieToLedgerIndex(); } private void handleLostBookies(Collection lostBookies, Map> ledgerDetails) throws BKAuditException { LOG.info("Following are the failed bookies: " + lostBookies + " and searching its ledgers for re-replication"); for (String bookieIP : lostBookies) { // identify all the ledgers in bookieIP and publishing these ledgers // as under-replicated. publishSuspectedLedgers(bookieIP, ledgerDetails.get(bookieIP)); } } private void publishSuspectedLedgers(String bookieIP, Set ledgers) throws BKAuditException { if (null == ledgers || ledgers.size() == 0) { // there is no ledgers available for this bookie and just // ignoring the bookie failures LOG.info("There is no ledgers for the failed bookie: " + bookieIP); return; } LOG.info("Following ledgers: " + ledgers + " of bookie: " + bookieIP + " are identified as underreplicated"); for (Long ledgerId : ledgers) { try { ledgerUnderreplicationManager.markLedgerUnderreplicated( ledgerId, bookieIP); } catch (UnavailableException ue) { throw new BKAuditException( "Failed to publish underreplicated ledger: " + ledgerId + " of bookie: " + bookieIP, ue); } } } /** * Process the result returned from checking a ledger */ private class ProcessLostFragmentsCb implements GenericCallback> { final LedgerHandle lh; final AsyncCallback.VoidCallback callback; ProcessLostFragmentsCb(LedgerHandle lh, AsyncCallback.VoidCallback callback) { this.lh = lh; this.callback = callback; } public void operationComplete(int rc, Set fragments) { try { if (rc == BKException.Code.OK) { Set bookies = Sets.newHashSet(); for (LedgerFragment f : fragments) { bookies.add(f.getAddress()); } for (InetSocketAddress bookie : bookies) { publishSuspectedLedgers(StringUtils.addrToString(bookie), Sets.newHashSet(lh.getId())); } } lh.close(); } catch (BKException bke) { LOG.error("Error closing lh", bke); if (rc == BKException.Code.OK) { rc = BKException.Code.ReplicationException; } } catch (InterruptedException ie) { LOG.error("Interrupted publishing suspected ledger", ie); Thread.currentThread().interrupt(); if (rc == BKException.Code.OK) { rc = BKException.Code.InterruptedException; } } catch (BKAuditException bkae) { LOG.error("Auditor exception publishing suspected ledger", bkae); if (rc == BKException.Code.OK) { rc = BKException.Code.ReplicationException; } } callback.processResult(rc, null, null); } } /** * List all the ledgers and check them individually. This should not * be run very often. 
 */ void checkAllLedgers() throws BKAuditException, BKException, IOException, InterruptedException, KeeperException { ZooKeeperWatcherBase w = new ZooKeeperWatcherBase(conf.getZkTimeout()); ZooKeeper newzk = ZkUtils.createConnectedZookeeperClient(conf.getZkServers(), w); final BookKeeper client = new BookKeeper(new ClientConfiguration(conf), newzk); final BookKeeperAdmin admin = new BookKeeperAdmin(client); try { final LedgerChecker checker = new LedgerChecker(client); final AtomicInteger returnCode = new AtomicInteger(BKException.Code.OK); final CountDownLatch processDone = new CountDownLatch(1); Processor<Long> checkLedgersProcessor = new Processor<Long>() { @Override public void process(final Long ledgerId, final AsyncCallback.VoidCallback callback) { try { if (!ledgerUnderreplicationManager.isLedgerReplicationEnabled()) { LOG.info("Ledger rereplication has been disabled, aborting periodic check"); processDone.countDown(); return; } } catch (ReplicationException.UnavailableException ue) { LOG.error("Underreplication manager unavailable " + "running periodic check", ue); processDone.countDown(); return; } LedgerHandle lh = null; try { lh = admin.openLedgerNoRecovery(ledgerId); checker.checkLedger(lh, new ProcessLostFragmentsCb(lh, callback)); } catch (BKException.BKNoSuchLedgerExistsException bknsle) { LOG.debug("Ledger was deleted before we could check it", bknsle); callback.processResult(BKException.Code.OK, null, null); return; } catch (BKException bke) { LOG.error("Couldn't open ledger " + ledgerId, bke); callback.processResult(BKException.Code.BookieHandleNotAvailableException, null, null); return; } catch (InterruptedException ie) { LOG.error("Interrupted opening ledger", ie); Thread.currentThread().interrupt(); callback.processResult(BKException.Code.InterruptedException, null, null); return; } finally { if (lh != null) { try { lh.close(); } catch (BKException bke) { LOG.warn("Couldn't close ledger " + ledgerId, bke); } catch (InterruptedException ie) { LOG.warn("Interrupted closing ledger " + ledgerId, ie); Thread.currentThread().interrupt(); } } } } }; ledgerManager.asyncProcessLedgers(checkLedgersProcessor, new AsyncCallback.VoidCallback() { @Override public void processResult(int rc, String s, Object obj) { returnCode.set(rc); processDone.countDown(); } }, null, BKException.Code.OK, BKException.Code.ReadException); try { processDone.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); throw new BKAuditException( "Exception while checking ledgers", e); } if (returnCode.get() != BKException.Code.OK) { throw BKException.create(returnCode.get()); } } finally { admin.close(); client.close(); newzk.close(); } } @Override public void availableBookiesChanged() { submitAuditTask(); } /** * Shutdown the auditor */ public void shutdown() { LOG.info("Shutting down auditor"); submitShutdownTask(); try { while (!executor.awaitTermination(30, TimeUnit.SECONDS)) { LOG.warn("Executor not shutting down, interrupting"); executor.shutdownNow(); } admin.close(); bkc.close(); } catch (InterruptedException ie) { Thread.currentThread().interrupt(); LOG.warn("Interrupted while shutting down auditor bookie", ie); } catch (BKException bke) { LOG.warn("Exception while shutting down auditor bookie", bke); } } /** * Return true if the auditor is running, otherwise return false * * @return auditor status */ public boolean isRunning() { return !executor.isShutdown(); } private final Runnable BOOKIE_CHECK = new Runnable() { public void run() { try { auditBookies(); } catch (BKException bke) {
LOG.error("Couldn't get bookie list, exiting", bke); submitShutdownTask(); } catch (KeeperException ke) { LOG.error("Exception while watching available bookies", ke); submitShutdownTask(); } catch (InterruptedException ie) { Thread.currentThread().interrupt(); LOG.error("Interrupted while watching available bookies ", ie); submitShutdownTask(); } catch (BKAuditException bke) { LOG.error("Exception while watching available bookies", bke); submitShutdownTask(); } } }; } AuditorElector.java000066400000000000000000000340411244507361200355410ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/replication/** * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.replication; import java.util.Collections; import java.util.Comparator; import java.util.List; import java.io.Serializable; import java.io.IOException; import java.net.InetSocketAddress; import org.apache.bookkeeper.proto.DataFormats.AuditorVoteFormat; import com.google.common.annotations.VisibleForTesting; import java.util.concurrent.Executors; import java.util.concurrent.ExecutorService; import java.util.concurrent.ThreadFactory; import java.util.concurrent.atomic.AtomicBoolean; import org.apache.bookkeeper.conf.ServerConfiguration; import org.apache.bookkeeper.replication.ReplicationException.UnavailableException; import org.apache.bookkeeper.util.BookKeeperConstants; import org.apache.commons.lang.StringUtils; import org.apache.zookeeper.CreateMode; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.WatchedEvent; import org.apache.zookeeper.Watcher; import org.apache.zookeeper.ZooKeeper; import org.apache.zookeeper.Watcher.Event.EventType; import org.apache.zookeeper.Watcher.Event.KeeperState; import org.apache.zookeeper.ZooDefs.Ids; import com.google.protobuf.TextFormat; import static com.google.common.base.Charsets.UTF_8; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Performing auditor election using Apache ZooKeeper. Using ZooKeeper as a * coordination service, when a bookie bids for auditor, it creates an ephemeral * sequential file (znode) on ZooKeeper and considered as their vote. Vote * format is 'V_sequencenumber'. Election will be done by comparing the * ephemeral sequential numbers and the bookie which has created the least znode * will be elected as Auditor. All the other bookies will be watching on their * predecessor znode according to the ephemeral sequence numbers. 
 */ public class AuditorElector { private static final Logger LOG = LoggerFactory .getLogger(AuditorElector.class); // Represents the index of the auditor node private static final int AUDITOR_INDEX = 0; // Represents vote prefix private static final String VOTE_PREFIX = "V_"; // Represents path Separator private static final String PATH_SEPARATOR = "/"; private static final String ELECTION_ZNODE = "auditorelection"; // Represents the underreplicated ledger path in zk private final String basePath; // Represents auditor election path in zk private final String electionPath; private final String bookieId; private final ServerConfiguration conf; private final ZooKeeper zkc; private final ExecutorService executor; private String myVote; Auditor auditor; private AtomicBoolean running = new AtomicBoolean(false); /** * AuditorElector for performing the auditor election * * @param bookieId * - bookie identifier, of the form HostAddress:Port * @param conf * - configuration * @param zkc * - ZK instance * @throws UnavailableException * throws unavailable exception while initializing the elector */ public AuditorElector(final String bookieId, ServerConfiguration conf, ZooKeeper zkc) throws UnavailableException { this.bookieId = bookieId; this.conf = conf; this.zkc = zkc; basePath = conf.getZkLedgersRootPath() + '/' + BookKeeperConstants.UNDER_REPLICATION_NODE; electionPath = basePath + '/' + ELECTION_ZNODE; createElectorPath(); executor = Executors.newSingleThreadExecutor(new ThreadFactory() { @Override public Thread newThread(Runnable r) { return new Thread(r, "AuditorElector-" + bookieId); } }); } private void createMyVote() throws KeeperException, InterruptedException { if (null == myVote || null == zkc.exists(myVote, false)) { AuditorVoteFormat.Builder builder = AuditorVoteFormat.newBuilder() .setBookieId(bookieId); myVote = zkc.create(getVotePath(PATH_SEPARATOR + VOTE_PREFIX), TextFormat.printToString(builder.build()).getBytes(UTF_8), Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL); } } private String getVotePath(String vote) { return electionPath + vote; } private void createElectorPath() throws UnavailableException { try { if (zkc.exists(basePath, false) == null) { try { zkc.create(basePath, new byte[0], Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT); } catch (KeeperException.NodeExistsException nee) { // do nothing, someone else could have created it } } if (zkc.exists(getVotePath(""), false) == null) { try { zkc.create(getVotePath(""), new byte[0], Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT); } catch (KeeperException.NodeExistsException nee) { // do nothing, someone else could have created it } } } catch (KeeperException ke) { throw new UnavailableException( "Failed to initialize Auditor Elector", ke); } catch (InterruptedException ie) { Thread.currentThread().interrupt(); throw new UnavailableException( "Failed to initialize Auditor Elector", ie); } } /** * Watches the predecessor bookie's znode and runs the election again when * that znode is deleted or the session expires. */ private class ElectionWatcher implements Watcher { @Override public void process(WatchedEvent event) { if (event.getState() == KeeperState.Expired) { LOG.error("Lost ZK connection, shutting down"); submitShutdownTask(); } else if (event.getType() == EventType.NodeDeleted) { submitElectionTask(); } } } public void start() { running.set(true); submitElectionTask(); } /** * Run cleanup operations for the auditor elector.
 */ private void submitShutdownTask() { executor.submit(new Runnable() { public void run() { if (!running.compareAndSet(true, false)) { return; } LOG.info("Shutting down AuditorElector"); if (myVote != null) { try { zkc.delete(myVote, -1); } catch (InterruptedException ie) { LOG.warn("InterruptedException while deleting myVote: " + myVote, ie); } catch (KeeperException ke) { LOG.error("Exception while deleting myVote: " + myVote, ke); } } } }); } /** * Performs the auditor election using ZooKeeper ephemeral sequential * znodes. The bookie that created the znode with the lowest sequence number * will be elected Auditor. */ @VisibleForTesting void submitElectionTask() { Runnable r = new Runnable() { public void run() { if (!running.get()) { return; } try { // create my vote in zk. Vote format is 'V_numeric' createMyVote(); List<String> children = zkc.getChildren(getVotePath(""), false); if (0 >= children.size()) { throw new IllegalArgumentException( "At least one bookie server should be present to elect the Auditor!"); } // sort in ascending order of sequence number Collections.sort(children, new ElectionComparator()); String voteNode = StringUtils.substringAfterLast(myVote, PATH_SEPARATOR); // start the auditing service if (children.get(AUDITOR_INDEX).equals(voteNode)) { // update the auditor bookie id in the election path. This is // done for debugging purposes AuditorVoteFormat.Builder builder = AuditorVoteFormat.newBuilder() .setBookieId(bookieId); zkc.setData(getVotePath(""), TextFormat.printToString(builder.build()).getBytes(UTF_8), -1); auditor = new Auditor(bookieId, conf, zkc); auditor.start(); } else { // If not the auditor, watch my predecessor and look for // its deletion. Watcher electionWatcher = new ElectionWatcher(); int myIndex = children.indexOf(voteNode); int prevNodeIndex = myIndex - 1; if (null == zkc.exists(getVotePath(PATH_SEPARATOR) + children.get(prevNodeIndex), electionWatcher)) { // The previous znode no longer exists, // so run the election again.
submitElectionTask(); } } } catch (KeeperException e) { LOG.error("Exception while performing auditor election", e); submitShutdownTask(); } catch (InterruptedException e) { LOG.error("Interrupted while performing auditor election", e); Thread.currentThread().interrupt(); submitShutdownTask(); } catch (UnavailableException e) { LOG.error("Ledger underreplication manager unavailable during election", e); submitShutdownTask(); } } }; executor.submit(r); } @VisibleForTesting Auditor getAuditor() { return auditor; } /** * Query zookeeper for the currently elected auditor * @return the bookie id of the current auditor */ public static InetSocketAddress getCurrentAuditor(ServerConfiguration conf, ZooKeeper zk) throws KeeperException, InterruptedException, IOException { String electionRoot = conf.getZkLedgersRootPath() + '/' + BookKeeperConstants.UNDER_REPLICATION_NODE + '/' + ELECTION_ZNODE; List<String> children = zk.getChildren(electionRoot, false); Collections.sort(children, new AuditorElector.ElectionComparator()); if (children.size() < 1) { return null; } String ledger = electionRoot + "/" + children.get(AUDITOR_INDEX); byte[] data = zk.getData(ledger, false, null); AuditorVoteFormat.Builder builder = AuditorVoteFormat.newBuilder(); TextFormat.merge(new String(data, UTF_8), builder); AuditorVoteFormat v = builder.build(); String[] parts = v.getBookieId().split(":"); return new InetSocketAddress(parts[0], Integer.valueOf(parts[1])); } /** * Shut down the AuditorElector */ public void shutdown() throws InterruptedException { synchronized (this) { if (executor.isShutdown()) { return; } submitShutdownTask(); executor.shutdown(); } if (auditor != null) { auditor.shutdown(); auditor = null; } } /** * If the current bookie is running as the auditor, return the status of the * auditor; otherwise return the status of the elector. * * @return true if the auditor or the elector is running */ public boolean isRunning() { if (auditor != null) { return auditor.isRunning(); } return running.get(); } /** * Compare the votes in ascending order of the sequence number. Vote * format is 'V_sequencenumber'; the comparator sorts based on the * numeric sequence value. */ private static class ElectionComparator implements Comparator<String>, Serializable { /** * Return -1 if the first vote is less than second. Return 1 if the * first vote is greater than second. Return 0 if the votes are equal. */ public int compare(String vote1, String vote2) { long voteSeqId1 = getVoteSequenceId(vote1); long voteSeqId2 = getVoteSequenceId(vote2); int result = voteSeqId1 < voteSeqId2 ? -1 : (voteSeqId1 > voteSeqId2 ? 1 : 0); return result; } private long getVoteSequenceId(String vote) { String voteId = StringUtils.substringAfter(vote, VOTE_PREFIX); return Long.parseLong(voteId); } } } AutoRecoveryMain.java000066400000000000000000000225441244507361200360550ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/replication/** * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License.
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.replication; import java.io.File; import java.io.IOException; import java.net.MalformedURLException; import com.google.common.annotations.VisibleForTesting; import org.apache.bookkeeper.bookie.Bookie; import org.apache.bookkeeper.bookie.ExitCode; import org.apache.bookkeeper.conf.ServerConfiguration; import org.apache.bookkeeper.replication.ReplicationException.CompatibilityException; import org.apache.bookkeeper.replication.ReplicationException.UnavailableException; import org.apache.bookkeeper.util.ZkUtils; import org.apache.bookkeeper.util.StringUtils; import org.apache.bookkeeper.zookeeper.ZooKeeperWatcherBase; import org.apache.commons.cli.BasicParser; import org.apache.commons.cli.CommandLine; import org.apache.commons.cli.HelpFormatter; import org.apache.commons.cli.Options; import org.apache.commons.cli.ParseException; import org.apache.commons.configuration.ConfigurationException; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.WatchedEvent; import org.apache.zookeeper.Watcher; import org.apache.zookeeper.ZooKeeper; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Class to start/stop the AutoRecovery daemons Auditor and ReplicationWorker */ public class AutoRecoveryMain { private static final Logger LOG = LoggerFactory .getLogger(AutoRecoveryMain.class); private ServerConfiguration conf; ZooKeeper zk; AuditorElector auditorElector; ReplicationWorker replicationWorker; private AutoRecoveryDeathWatcher deathWatcher; private int exitCode; private volatile boolean shuttingDown = false; private volatile boolean running = false; public AutoRecoveryMain(ServerConfiguration conf) throws IOException, InterruptedException, KeeperException, UnavailableException, CompatibilityException { this.conf = conf; ZooKeeperWatcherBase w = new ZooKeeperWatcherBase(conf.getZkTimeout()) { @Override public void process(WatchedEvent event) { // Check for expired connection. 
if (event.getState().equals(Watcher.Event.KeeperState.Expired)) { LOG.error("ZK client connection to the" + " ZK server has expired!"); shutdown(ExitCode.ZK_EXPIRED); } else { super.process(event); } } }; zk = ZkUtils.createConnectedZookeeperClient(conf.getZkServers(), w); auditorElector = new AuditorElector( StringUtils.addrToString(Bookie.getBookieAddress(conf)), conf, zk); replicationWorker = new ReplicationWorker(zk, conf, Bookie.getBookieAddress(conf)); deathWatcher = new AutoRecoveryDeathWatcher(this); } /* * Start daemons */ public void start() throws UnavailableException { auditorElector.start(); replicationWorker.start(); deathWatcher.start(); running = true; } /* * Waits till all daemons join */ public void join() throws InterruptedException { deathWatcher.join(); } /* * Shut down all daemons gracefully */ public void shutdown() { shutdown(ExitCode.OK); } private void shutdown(int exitCode) { if (shuttingDown) { return; } shuttingDown = true; running = false; this.exitCode = exitCode; try { deathWatcher.interrupt(); deathWatcher.join(); auditorElector.shutdown(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); LOG.warn("Interrupted shutting down auto recovery", e); } replicationWorker.shutdown(); try { zk.close(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); LOG.warn("Interrupted shutting down auto recovery", e); } } private int getExitCode() { return exitCode; } @VisibleForTesting public Auditor getAuditor() { return auditorElector.getAuditor(); } /** Is the auto-recovery service running? */ public boolean isAutoRecoveryRunning() { return running; } /* * DeathWatcher for AutoRecovery daemons. */ private static class AutoRecoveryDeathWatcher extends Thread { private int watchInterval; private AutoRecoveryMain autoRecoveryMain; public AutoRecoveryDeathWatcher(AutoRecoveryMain autoRecoveryMain) { super("AutoRecoveryDeathWatcher-" + autoRecoveryMain.conf.getBookiePort()); this.autoRecoveryMain = autoRecoveryMain; watchInterval = autoRecoveryMain.conf.getDeathWatchInterval(); } @Override public void run() { while (true) { try { Thread.sleep(watchInterval); } catch (InterruptedException ie) { break; } // If any one service is not running, shut down the whole process. if (!autoRecoveryMain.auditorElector.isRunning() || !autoRecoveryMain.replicationWorker.isRunning()) { autoRecoveryMain.shutdown(ExitCode.SERVER_EXCEPTION); break; } } } } private static final Options opts = new Options(); static { opts.addOption("c", "conf", true, "Bookie server configuration"); opts.addOption("h", "help", false, "Print help message"); } /* * Print usage */ private static void printUsage() { HelpFormatter hf = new HelpFormatter(); hf.printHelp("AutoRecoveryMain [options]\n", opts); } /* * Load configurations from file.
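 * Throws IllegalArgumentException if the file cannot be opened or is malformed.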
 */ private static void loadConfFile(ServerConfiguration conf, String confFile) throws IllegalArgumentException { try { conf.loadConf(new File(confFile).toURI().toURL()); } catch (MalformedURLException e) { LOG.error("Could not open configuration file: " + confFile, e); throw new IllegalArgumentException(); } catch (ConfigurationException e) { LOG.error("Malformed configuration file: " + confFile, e); throw new IllegalArgumentException(); } LOG.info("Using configuration file " + confFile); } /* * Parse console args */ private static ServerConfiguration parseArgs(String[] args) throws IllegalArgumentException { try { BasicParser parser = new BasicParser(); CommandLine cmdLine = parser.parse(opts, args); if (cmdLine.hasOption('h')) { throw new IllegalArgumentException(); } ServerConfiguration conf = new ServerConfiguration(); String[] leftArgs = cmdLine.getArgs(); if (cmdLine.hasOption('c')) { if (null != leftArgs && leftArgs.length > 0) { throw new IllegalArgumentException(); } String confFile = cmdLine.getOptionValue("c"); loadConfFile(conf, confFile); } if (null != leftArgs && leftArgs.length > 0) { throw new IllegalArgumentException(); } return conf; } catch (ParseException e) { throw new IllegalArgumentException(e); } } public static void main(String[] args) { ServerConfiguration conf = null; try { conf = parseArgs(args); } catch (IllegalArgumentException iae) { LOG.error("Error parsing command line arguments: ", iae); System.err.println(iae.getMessage()); printUsage(); System.exit(ExitCode.INVALID_CONF); } try { final AutoRecoveryMain autoRecoveryMain = new AutoRecoveryMain(conf); autoRecoveryMain.start(); Runtime.getRuntime().addShutdownHook(new Thread() { @Override public void run() { autoRecoveryMain.shutdown(); LOG.info("Shut down AutoRecoveryMain successfully"); } }); LOG.info("Registered shutdown hook successfully"); autoRecoveryMain.join(); System.exit(autoRecoveryMain.getExitCode()); } catch (Exception e) { LOG.error("Exception running AutoRecoveryMain: ", e); System.exit(ExitCode.SERVER_EXCEPTION); } } } BookieLedgerIndexer.java000066400000000000000000000134221244507361200364660ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/replication/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License.
 */ package org.apache.bookkeeper.replication; import java.net.InetSocketAddress; import java.util.ArrayList; import java.util.Collections; import java.util.HashSet; import java.util.List; import java.util.Map; import java.util.Set; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.CountDownLatch; import org.apache.bookkeeper.client.BKException; import org.apache.bookkeeper.client.LedgerMetadata; import org.apache.bookkeeper.meta.LedgerManager; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.GenericCallback; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.Processor; import org.apache.bookkeeper.replication.ReplicationException.BKAuditException; import org.apache.bookkeeper.util.StringUtils; import org.apache.zookeeper.AsyncCallback; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Prepares the mapping of each bookie to its corresponding ledgers. It always * looks up the ledger manager for ledger metadata and generates the indexes. */ public class BookieLedgerIndexer { private static final Logger LOG = LoggerFactory.getLogger(BookieLedgerIndexer.class); private final LedgerManager ledgerManager; public BookieLedgerIndexer(LedgerManager ledgerManager) { this.ledgerManager = ledgerManager; } /** * Generates the bookie-to-ledgers map by reading all the ledgers in each * bookie and parsing their metadata. * * @return bookie2ledgersMap map of bookie vs ledgers * @throws BKAuditException * exception while getting bookie-ledgers */ public Map<String, Set<Long>> getBookieToLedgerIndex() throws BKAuditException { // bookie vs ledgers map final ConcurrentHashMap<String, Set<Long>> bookie2ledgersMap = new ConcurrentHashMap<String, Set<Long>>(); final CountDownLatch ledgerCollectorLatch = new CountDownLatch(1); Processor<Long> ledgerProcessor = new Processor<Long>() { @Override public void process(final Long ledgerId, final AsyncCallback.VoidCallback iterCallback) { GenericCallback<LedgerMetadata> genericCallback = new GenericCallback<LedgerMetadata>() { @Override public void operationComplete(final int rc, LedgerMetadata ledgerMetadata) { if (rc == BKException.Code.OK) { for (Map.Entry<Long, ArrayList<InetSocketAddress>> ensemble : ledgerMetadata .getEnsembles().entrySet()) { for (InetSocketAddress bookie : ensemble .getValue()) { putLedger(bookie2ledgersMap, StringUtils.addrToString(bookie), ledgerId); } } } else { LOG.warn("Unable to read the ledger: " + ledgerId + " information"); } iterCallback.processResult(rc, null, null); } }; ledgerManager.readLedgerMetadata(ledgerId, genericCallback); } }; // Read the result after processing all the ledgers final List<Integer> resultCode = new ArrayList<Integer>(1); ledgerManager.asyncProcessLedgers(ledgerProcessor, new AsyncCallback.VoidCallback() { @Override public void processResult(int rc, String s, Object obj) { resultCode.add(rc); ledgerCollectorLatch.countDown(); } }, null, BKException.Code.OK, BKException.Code.ReadException); try { ledgerCollectorLatch.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); throw new BKAuditException( "Exception while getting the bookie-ledgers", e); } if (!resultCode.contains(BKException.Code.OK)) { throw new BKAuditException( "Exception while getting the bookie-ledgers", BKException .create(resultCode.get(0))); } return bookie2ledgersMap; } private void putLedger(ConcurrentHashMap<String, Set<Long>> bookie2ledgersMap, String bookie, long ledgerId) { Set<Long> ledgers = bookie2ledgersMap.get(bookie); // create an empty set for the bookie to keep its ledgers, if absent if (ledgers == null) { ledgers = Collections.synchronizedSet(new HashSet<Long>()); Set<Long> oldLedgers = bookie2ledgersMap.putIfAbsent(bookie, ledgers); if
(oldLedgers != null) { ledgers = oldLedgers; } } ledgers.add(ledgerId); } } ReplicationEnableCb.java000066400000000000000000000036531244507361200364460ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/replication/** * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.replication; import java.util.concurrent.CountDownLatch; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.GenericCallback; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Callback that is notified when the replication process is enabled */ public class ReplicationEnableCb implements GenericCallback<Void> { private static final Logger LOG = LoggerFactory .getLogger(ReplicationEnableCb.class); private final CountDownLatch latch = new CountDownLatch(1); @Override public void operationComplete(int rc, Void result) { latch.countDown(); LOG.debug("Automatic ledger re-replication is enabled"); } /** * This is a blocking call and causes the current thread to wait until the * replication process is enabled * * @throws InterruptedException * interrupted while waiting */ public void await() throws InterruptedException { LOG.debug("Automatic ledger re-replication is disabled. " + "Hence waiting until it's enabled!"); latch.await(); } } ReplicationException.java000066400000000000000000000046571244507361200367520ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/replication/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License.
 */ package org.apache.bookkeeper.replication; /** * Exceptions for use within the replication service */ public abstract class ReplicationException extends Exception { protected ReplicationException(String message, Throwable cause) { super(message, cause); } protected ReplicationException(String message) { super(message); } /** * The replication service has become unavailable */ public static class UnavailableException extends ReplicationException { private static final long serialVersionUID = 31872209L; public UnavailableException(String message, Throwable cause) { super(message, cause); } public UnavailableException(String message) { super(message); } } /** * Compatibility error. This version of the code doesn't know how to * deal with the metadata it has found. */ public static class CompatibilityException extends ReplicationException { private static final long serialVersionUID = 98551903L; public CompatibilityException(String message, Throwable cause) { super(message, cause); } public CompatibilityException(String message) { super(message); } } /** * Exception while auditing bookie-ledgers */ static class BKAuditException extends ReplicationException { private static final long serialVersionUID = 95551905L; BKAuditException(String message, Throwable cause) { super(message, cause); } BKAuditException(String message) { super(message); } } } ReplicationWorker.java000066400000000000000000000410001244507361200362520ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/replication/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License.
 * */ package org.apache.bookkeeper.replication; import java.io.IOException; import java.net.InetSocketAddress; import java.util.List; import java.util.Set; import java.util.Timer; import java.util.TimerTask; import java.util.SortedMap; import java.util.ArrayList; import java.util.Collection; import java.util.concurrent.CountDownLatch; import org.apache.bookkeeper.client.BKException; import org.apache.bookkeeper.client.BookKeeper; import org.apache.bookkeeper.client.BookKeeperAdmin; import org.apache.bookkeeper.client.LedgerChecker; import org.apache.bookkeeper.client.LedgerFragment; import org.apache.bookkeeper.client.LedgerHandle; import org.apache.bookkeeper.client.LedgerMetadata; import org.apache.bookkeeper.client.BKException.BKBookieHandleNotAvailableException; import org.apache.bookkeeper.client.BKException.BKNoSuchLedgerExistsException; import org.apache.bookkeeper.client.BKException.BKReadException; import org.apache.bookkeeper.conf.ClientConfiguration; import org.apache.bookkeeper.conf.ServerConfiguration; import org.apache.bookkeeper.meta.LedgerManagerFactory; import org.apache.bookkeeper.meta.LedgerUnderreplicationManager; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.GenericCallback; import org.apache.bookkeeper.replication.ReplicationException.CompatibilityException; import org.apache.bookkeeper.replication.ReplicationException.UnavailableException; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.ZooKeeper; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * ReplicationWorker takes under-replicated ledgers one by one from * ZKLedgerUnderreplicationManager and replicates their fragments to the * target bookie. */ public class ReplicationWorker implements Runnable { private static Logger LOG = LoggerFactory .getLogger(ReplicationWorker.class); private final LedgerUnderreplicationManager underreplicationManager; private ServerConfiguration conf; private ZooKeeper zkc; private volatile boolean workerRunning = false; private final BookKeeperAdmin admin; private LedgerChecker ledgerChecker; private InetSocketAddress targetBookie; private BookKeeper bkc; private Thread workerThread; private long openLedgerRereplicationGracePeriod; private Timer pendingReplicationTimer; /** * Replication worker for replicating the ledger fragments from * UnderReplicationManager to the targetBookie. This target bookie will be a * local bookie. * * @param zkc * - ZK instance * @param conf * - configurations * @param targetBKAddr * - where replication should happen. Ideally this will be the * local bookie's address.
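 * <p> * A minimal usage sketch (illustrative only, not part of the original source; AutoRecoveryMain normally wires this up with the local bookie's address): * <pre>{@code * ReplicationWorker worker = new ReplicationWorker(zkc, conf, localBookieAddr); * worker.start(); // spawns the thread that rereplicates fragments * // ... on shutdown * worker.shutdown(); // stops the thread, closes the clients * }</pre>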
 */ public ReplicationWorker(final ZooKeeper zkc, final ServerConfiguration conf, InetSocketAddress targetBKAddr) throws CompatibilityException, KeeperException, InterruptedException, IOException { this.zkc = zkc; this.conf = conf; this.targetBookie = targetBKAddr; LedgerManagerFactory mFactory = LedgerManagerFactory .newLedgerManagerFactory(this.conf, this.zkc); this.underreplicationManager = mFactory .newLedgerUnderreplicationManager(); this.bkc = new BookKeeper(new ClientConfiguration(conf), zkc); this.admin = new BookKeeperAdmin(bkc); this.ledgerChecker = new LedgerChecker(bkc); this.workerThread = new Thread(this, "ReplicationWorker"); this.openLedgerRereplicationGracePeriod = conf .getOpenLedgerRereplicationGracePeriod(); this.pendingReplicationTimer = new Timer("PendingReplicationTimer"); } /** Start the replication worker */ public void start() { this.workerThread.start(); } @Override public void run() { workerRunning = true; while (workerRunning) { try { rereplicate(); } catch (InterruptedException e) { shutdown(); Thread.currentThread().interrupt(); LOG.info("InterruptedException " + "while replicating fragments", e); return; } catch (BKException e) { shutdown(); LOG.error("BKException while replicating fragments", e); return; } catch (UnavailableException e) { shutdown(); LOG.error("UnavailableException " + "while replicating fragments", e); return; } } } /** * Replicates the under-replicated fragments of a failed bookie's ledger to * the target bookie */ private void rereplicate() throws InterruptedException, BKException, UnavailableException { long ledgerIdToReplicate = underreplicationManager .getLedgerToRereplicate(); LOG.debug("Going to replicate the fragments of the ledger: {}", ledgerIdToReplicate); LedgerHandle lh; try { lh = admin.openLedgerNoRecovery(ledgerIdToReplicate); } catch (BKNoSuchLedgerExistsException e) { // Ledger might have been deleted by user LOG.info("BKNoSuchLedgerExistsException while opening " + "ledger for replication. Other clients " + "might have deleted the ledger. " + "So, no harm to continue"); underreplicationManager.markLedgerReplicated(ledgerIdToReplicate); return; } catch (BKReadException e) { LOG.info("BKReadException while" + " opening ledger for replication." + " Enough bookies might not be available." + " So, no harm to continue"); underreplicationManager .releaseUnderreplicatedLedger(ledgerIdToReplicate); return; } catch (BKBookieHandleNotAvailableException e) { LOG.info("BKBookieHandleNotAvailableException while" + " opening ledger for replication."
+ " Enough Bookies might not have available" + "So, no harm to continue"); underreplicationManager .releaseUnderreplicatedLedger(ledgerIdToReplicate); return; } Set fragments = getUnderreplicatedFragments(lh); LOG.debug("Founds fragments {} for replication from ledger: {}", fragments, ledgerIdToReplicate); boolean foundOpenFragments = false; for (LedgerFragment ledgerFragment : fragments) { if (!ledgerFragment.isClosed()) { foundOpenFragments = true; continue; } else if (isTargetBookieExistsInFragmentEnsemble(lh, ledgerFragment)) { LOG.debug("Target Bookie[{}] found in the fragment ensemble: {}", targetBookie, ledgerFragment.getEnsemble()); continue; } try { admin.replicateLedgerFragment(lh, ledgerFragment, targetBookie); } catch (BKException.BKBookieHandleNotAvailableException e) { LOG.warn("BKBookieHandleNotAvailableException " + "while replicating the fragment", e); } catch (BKException.BKLedgerRecoveryException e) { LOG.warn("BKLedgerRecoveryException " + "while replicating the fragment", e); if (admin.getReadOnlyBookies().contains(targetBookie)) { throw new BKException.BKWriteOnReadOnlyBookieException(); } } } if (foundOpenFragments || isLastSegmentOpenAndMissingBookies(lh)) { deferLedgerLockRelease(ledgerIdToReplicate); return; } fragments = getUnderreplicatedFragments(lh); if (fragments.size() == 0) { LOG.info("Ledger replicated successfully. ledger id is: " + ledgerIdToReplicate); underreplicationManager.markLedgerReplicated(ledgerIdToReplicate); } else { // Releasing the underReplication ledger lock and compete // for the replication again for the pending fragments underreplicationManager .releaseUnderreplicatedLedger(ledgerIdToReplicate); } } /** * When checking the fragments of a ledger, there is a corner case * where if the last segment/ensemble is open, but nothing has been written to * some of the quorums in the ensemble, bookies can fail without any action being * taken. This is fine, until enough bookies fail to cause a quorum to become * unavailable, by which time the ledger is unrecoverable. * * For example, if in a E3Q2, only 1 entry is written and the last bookie * in the ensemble fails, nothing has been written to it, so nothing needs to be * recovered. But if the second to last bookie fails, we've now lost quorum for * the second entry, so it's impossible to see if the second has been written or * not. * * To avoid this situation, we need to check if bookies in the final open ensemble * are unavailable, and take action if so. The action to take is to close the ledger, * after a grace period as the writting client may replace the faulty bookie on its * own. * * Missing bookies in closed ledgers are fine, as we know the last confirmed add, so * we can tell which entries are supposed to exist and rereplicate them if necessary. 
 */ private boolean isLastSegmentOpenAndMissingBookies(LedgerHandle lh) throws BKException { LedgerMetadata md = admin.getLedgerMetadata(lh); if (md.isClosed()) { return false; } SortedMap<Long, ArrayList<InetSocketAddress>> ensembles = admin.getLedgerMetadata(lh).getEnsembles(); ArrayList<InetSocketAddress> finalEnsemble = ensembles.get(ensembles.lastKey()); Collection<InetSocketAddress> available = admin.getAvailableBookies(); for (InetSocketAddress b : finalEnsemble) { if (!available.contains(b)) { return true; } } return false; } /** Gets the under-replicated fragments */ private Set<LedgerFragment> getUnderreplicatedFragments(LedgerHandle lh) throws InterruptedException { CheckerCallback checkerCb = new CheckerCallback(); ledgerChecker.checkLedger(lh, checkerCb); Set<LedgerFragment> fragments = checkerCb.waitAndGetResult(); return fragments; } /** * Schedules a timer task for releasing the lock, which will be scheduled * after the open ledger fragment replication time. The ledger will be fenced if it * is still in open state when the timer task fires */ private void deferLedgerLockRelease(final long ledgerId) { long gracePeriod = this.openLedgerRereplicationGracePeriod; TimerTask timerTask = new TimerTask() { @Override public void run() { LedgerHandle lh = null; try { lh = admin.openLedgerNoRecovery(ledgerId); if (isLastSegmentOpenAndMissingBookies(lh)) { lh = admin.openLedger(ledgerId); } Set<LedgerFragment> fragments = getUnderreplicatedFragments(lh); for (LedgerFragment fragment : fragments) { if (!fragment.isClosed()) { lh = admin.openLedger(ledgerId); break; } } } catch (InterruptedException e) { Thread.currentThread().interrupt(); LOG.info("InterruptedException " + "while replicating fragments", e); } catch (BKNoSuchLedgerExistsException bknsle) { LOG.debug("Ledger was deleted, safe to continue", bknsle); } catch (BKException e) { LOG.error("BKException while fencing the ledger" + " for rereplication of postponed ledgers", e); } finally { try { if (lh != null) { lh.close(); } } catch (InterruptedException e) { Thread.currentThread().interrupt(); LOG.info("InterruptedException while closing " + "ledger", e); } catch (BKException e) { // Let's go ahead and release the lock. Catch the actual // exception in the normal replication flow and take // action.
LOG.warn("BKException while closing ledger ", e); } finally { try { underreplicationManager .releaseUnderreplicatedLedger(ledgerId); } catch (UnavailableException e) { shutdown(); LOG.error("UnavailableException " + "while replicating fragments", e); } } } } }; pendingReplicationTimer.schedule(timerTask, gracePeriod); } /** * Stop the replication worker service */ public void shutdown() { synchronized (this) { if (!workerRunning) { return; } workerRunning = false; } this.pendingReplicationTimer.cancel(); try { this.workerThread.interrupt(); this.workerThread.join(); } catch (InterruptedException e) { LOG.error("Interrupted during shutting down replication worker : ", e); Thread.currentThread().interrupt(); } try { bkc.close(); } catch (InterruptedException e) { LOG.warn("Interrupted while closing the Bookie client", e); Thread.currentThread().interrupt(); } catch (BKException e) { LOG.warn("Exception while closing the Bookie client", e); } try { underreplicationManager.close(); } catch (UnavailableException e) { LOG.warn("Exception while closing the " + "ZkLedgerUnderrepliationManager", e); } } /** * Gives the running status of ReplicationWorker */ boolean isRunning() { return workerRunning; } private boolean isTargetBookieExistsInFragmentEnsemble(LedgerHandle lh, LedgerFragment ledgerFragment) { List ensemble = ledgerFragment.getEnsemble(); for (InetSocketAddress bkAddr : ensemble) { if (targetBookie.equals(bkAddr)) { return true; } } return false; } /** Ledger checker call back */ private static class CheckerCallback implements GenericCallback> { private Set result = null; private CountDownLatch latch = new CountDownLatch(1); @Override public void operationComplete(int rc, Set result) { this.result = result; latch.countDown(); } /** * Wait until operation complete call back comes and return the ledger * fragments set */ Set waitAndGetResult() throws InterruptedException { latch.await(); return result; } } } bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/streaming/000077500000000000000000000000001244507361200315065ustar00rootroot00000000000000LedgerInputStream.java000066400000000000000000000133041244507361200356710ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/streaming/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
package org.apache.bookkeeper.streaming;

import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;
import java.util.Enumeration;

import org.apache.bookkeeper.client.BKException;
import org.apache.bookkeeper.client.LedgerEntry;
import org.apache.bookkeeper.client.LedgerHandle;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LedgerInputStream extends InputStream {
    Logger LOG = LoggerFactory.getLogger(LedgerInputStream.class);
    private LedgerHandle lh;
    private ByteBuffer bytebuff;
    byte[] bbytes;
    long lastEntry = 0;
    int increment = 50;
    int defaultSize = 1024 * 1024; // 1MB default size
    Enumeration<LedgerEntry> ledgerSeq = null;

    /**
     * construct an input stream from a ledger handle
     *
     * @param lh
     *            ledger handle
     * @throws BKException
     * @throws InterruptedException
     */
    public LedgerInputStream(LedgerHandle lh) throws BKException, InterruptedException {
        this.lh = lh;
        bbytes = new byte[defaultSize];
        this.bytebuff = ByteBuffer.wrap(bbytes);
        this.bytebuff.position(this.bytebuff.limit());
        lastEntry = Math.min(lh.getLastAddConfirmed(), increment);
        ledgerSeq = lh.readEntries(0, lastEntry);
    }

    /**
     * construct an input stream from a ledger handle
     *
     * @param lh
     *            the ledger handle
     * @param size
     *            the size of the buffer
     * @throws BKException
     * @throws InterruptedException
     */
    public LedgerInputStream(LedgerHandle lh, int size) throws BKException, InterruptedException {
        this.lh = lh;
        bbytes = new byte[size];
        this.bytebuff = ByteBuffer.wrap(bbytes);
        this.bytebuff.position(this.bytebuff.limit());
        lastEntry = Math.min(lh.getLastAddConfirmed(), increment);
        ledgerSeq = lh.readEntries(0, lastEntry);
    }

    /**
     * Method close currently doesn't do anything. The application is supposed
     * to open and close the ledger handle backing up a stream
     * ({@link LedgerHandle}).
     */
    @Override
    public void close() {
        // do nothing; let the application close the ledger
    }

    /**
     * refill the buffer, we need to read more bytes
     *
     * @return if we can refill or not
     */
    private synchronized boolean refill() throws IOException {
        bytebuff.clear();
        if (!ledgerSeq.hasMoreElements() && lastEntry >= lh.getLastAddConfirmed()) {
            return false;
        }
        if (!ledgerSeq.hasMoreElements()) {
            // do refill
            long last = Math.min(lastEntry + increment, lh.getLastAddConfirmed());
            try {
                ledgerSeq = lh.readEntries(lastEntry + 1, last);
            } catch (BKException bk) {
                IOException ie = new IOException(bk.getMessage());
                ie.initCause(bk);
                throw ie;
            } catch (InterruptedException ie) {
                // Propagate the interrupt instead of falling through to
                // nextElement() on an exhausted enumeration.
                Thread.currentThread().interrupt();
                throw new IOException("Interrupted while reading entries", ie);
            }
            lastEntry = last;
        }
        LedgerEntry le = ledgerSeq.nextElement();
        bbytes = le.getEntry();
        bytebuff = ByteBuffer.wrap(bbytes);
        return true;
    }

    @Override
    public synchronized int read() throws IOException {
        boolean toread = true;
        if (bytebuff.remaining() == 0) {
            // there are no remaining bytes
            toread = refill();
        }
        if (toread) {
            return 0xFF & bytebuff.get();
        }
        return -1;
    }
    @Override
    public synchronized int read(byte[] b) throws IOException {
        // be smart: just copy the bytes once and return the size;
        // the user will call read again for more
        boolean toread = true;
        if (bytebuff.remaining() == 0) {
            toread = refill();
        }
        if (toread) {
            int bcopied = bytebuff.remaining();
            int tocopy = Math.min(bcopied, b.length);
            // cannot use bulk get() because of the underflow/overflow exceptions
            System.arraycopy(bbytes, bytebuff.position(), b, 0, tocopy);
            bytebuff.position(bytebuff.position() + tocopy);
            return tocopy;
        }
        return -1;
    }

    @Override
    public synchronized int read(byte[] b, int off, int len) throws IOException {
        // again, no need to fully fill b; just return what we have
        // and let the application call read again
        boolean toread = true;
        if (bytebuff.remaining() == 0) {
            toread = refill();
        }
        if (toread) {
            int bcopied = bytebuff.remaining();
            int tocopy = Math.min(bcopied, len);
            System.arraycopy(bbytes, bytebuff.position(), b, off, tocopy);
            bytebuff.position(bytebuff.position() + tocopy);
            return tocopy;
        }
        return -1;
    }
}
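// ---------------------------------------------------------------------------
// Usage sketch (not part of the release source): round-tripping bytes through
// the two streaming classes in this package. The BookKeeper client calls
// follow the public 4.2 client API; the ZK address, digest type, password and
// error handling are illustrative only.
//
//   BookKeeper bk = new BookKeeper("zk1:2181");
//   LedgerHandle wlh = bk.createLedger(BookKeeper.DigestType.CRC32, "pwd".getBytes());
//   LedgerOutputStream out = new LedgerOutputStream(wlh);
//   out.write("hello bookkeeper".getBytes());
//   out.flush();                 // each flush writes one ledger entry
//   wlh.close();
//
//   LedgerHandle rlh = bk.openLedger(wlh.getId(), BookKeeper.DigestType.CRC32, "pwd".getBytes());
//   LedgerInputStream in = new LedgerInputStream(rlh);
//   byte[] buf = new byte[1024];
//   int n = in.read(buf);        // may return fewer bytes than requested
//   rlh.close();                 // LedgerInputStream.close() is a no-op
// ---------------------------------------------------------------------------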
LedgerOutputStream.java000066400000000000000000000107211244507361200360720ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/streaming/
/*
 *
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 *
 */
package org.apache.bookkeeper.streaming;

import java.io.IOException;
import java.io.OutputStream;
import java.nio.ByteBuffer;

import org.apache.bookkeeper.client.BKException;
import org.apache.bookkeeper.client.LedgerHandle;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * This class provides a streaming API to get an output stream from a ledger
 * handle and write to it as a stream of bytes. It is built on top of the
 * LedgerHandle API and uses a buffer to cache the data written to it,
 * writing the buffered data out to the ledger as entries.
 */
public class LedgerOutputStream extends OutputStream {
    Logger LOG = LoggerFactory.getLogger(LedgerOutputStream.class);
    private LedgerHandle lh;
    private ByteBuffer bytebuff;
    byte[] bbytes;
    int defaultSize = 1024 * 1024; // 1MB default size

    /**
     * construct an output stream from a ledger handle
     *
     * @param lh
     *            ledger handle
     */
    public LedgerOutputStream(LedgerHandle lh) {
        this.lh = lh;
        bbytes = new byte[defaultSize];
        this.bytebuff = ByteBuffer.wrap(bbytes);
    }

    /**
     * construct an output stream from a ledger handle
     *
     * @param lh
     *            the ledger handle
     * @param size
     *            the size of the buffer
     */
    public LedgerOutputStream(LedgerHandle lh, int size) {
        this.lh = lh;
        bbytes = new byte[size];
        this.bytebuff = ByteBuffer.wrap(bbytes);
    }

    @Override
    public void close() {
        // flush everything we have buffered
        flush();
    }

    @Override
    public synchronized void flush() {
        // flush all the buffered data into one ledger entry
        if (bytebuff.position() > 0) {
            // copy the bytes into a new byte array and send it out
            byte[] b = new byte[bytebuff.position()];
            LOG.info("Flushing " + bytebuff.position() + " bytes");
            System.arraycopy(bbytes, 0, b, 0, bytebuff.position());
            try {
                lh.addEntry(b);
            } catch (InterruptedException ie) {
                LOG.warn("Interrupted while flushing", ie);
                Thread.currentThread().interrupt();
            } catch (BKException bke) {
                LOG.warn("BookKeeper exception ", bke);
            }
        }
    }

    /**
     * make space for len bytes to be written to the buffer.
     *
     * @param len
     * @return true if space could be made for len bytes, false otherwise
     */
    private boolean makeSpace(int len) {
        if (bytebuff.remaining() < len) {
            flush();
            bytebuff.clear();
            if (bytebuff.capacity() < len) {
                return false;
            }
        }
        return true;
    }

    @Override
    public synchronized void write(byte[] b) {
        if (makeSpace(b.length)) {
            bytebuff.put(b);
        } else {
            // too big for the buffer; write it out as its own entry
            try {
                lh.addEntry(b);
            } catch (InterruptedException ie) {
                LOG.warn("Interrupted while writing", ie);
                Thread.currentThread().interrupt();
            } catch (BKException bke) {
                LOG.warn("BookKeeper exception", bke);
            }
        }
    }

    @Override
    public synchronized void write(byte[] b, int off, int len) {
        if (!makeSpace(len)) {
            // let's try making the buffer bigger
            bbytes = new byte[len];
            bytebuff = ByteBuffer.wrap(bbytes);
        }
        bytebuff.put(b, off, len);
    }

    @Override
    public synchronized void write(int b) throws IOException {
        makeSpace(1);
        byte oneB = (byte) (b & 0xFF);
        bytebuff.put(oneB);
    }
}
bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/tools/000077500000000000000000000000001244507361200306555ustar00rootroot00000000000000
BookKeeperTools.java000066400000000000000000000070061244507361200345130ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/tools
package org.apache.bookkeeper.tools;

/*
 *
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 *
 */

import java.io.IOException;
import java.net.InetSocketAddress;

import org.apache.bookkeeper.client.BKException;
import org.apache.bookkeeper.client.BookKeeperAdmin;
import org.apache.zookeeper.KeeperException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * Provides admin tools to manage the BookKeeper cluster.
 */
public class BookKeeperTools {
    private static Logger LOG = LoggerFactory.getLogger(BookKeeperTools.class);

    /**
     * Main method so we can invoke the bookie recovery via command line.
     *
     * @param args
     *            Arguments to BookKeeperTools. Two are required and the third
     *            is optional. The first is a comma-separated list of ZK
     *            server host:port pairs. The second is the host:port socket
     *            address of the bookie we are trying to recover. The third is
     *            the host:port socket address of the optional destination
     *            bookie server we want to replicate the data over to.
     * @throws InterruptedException
     * @throws IOException
     * @throws KeeperException
     * @throws BKException
     */
    public static void main(String[] args)
            throws InterruptedException, IOException, KeeperException, BKException {
        // Validate the inputs
        if (args.length < 2) {
            System.err.println("USAGE: BookKeeperTools zkServers bookieSrc [bookieDest]");
            return;
        }
        // Parse out the input arguments
        String zkServers = args[0];
        String bookieSrcString[] = args[1].split(":");
        if (bookieSrcString.length < 2) {
            System.err.println("BookieSrc given has an invalid format (host:port expected): " + args[1]);
            return;
        }
        final InetSocketAddress bookieSrc = new InetSocketAddress(bookieSrcString[0],
                Integer.parseInt(bookieSrcString[1]));
        InetSocketAddress bookieDest = null;
        // Only parse the destination when a third argument is actually
        // present (the original condition was inverted and would have thrown
        // ArrayIndexOutOfBoundsException).
        if (args.length >= 3) {
            String bookieDestString[] = args[2].split(":");
            if (bookieDestString.length < 2) {
                System.err.println("BookieDest given has an invalid format (host:port expected): " + args[2]);
                return;
            }
            bookieDest = new InetSocketAddress(bookieDestString[0],
                    Integer.parseInt(bookieDestString[1]));
        }
        // Create the BookKeeperAdmin instance and perform the bookie recovery
        // synchronously.
        BookKeeperAdmin bkTools = new BookKeeperAdmin(zkServers);
        bkTools.recoverBookieData(bookieSrc, bookieDest);
        // Shutdown the resources used in the BookKeeperTools instance.
        bkTools.close();
    }
}
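// ---------------------------------------------------------------------------
// Usage sketch (not part of the release source): recovering a failed bookie's
// ledger data from the command line, and the equivalent direct API call.
// Host names and ports below are illustrative.
//
//   $ java org.apache.bookkeeper.tools.BookKeeperTools \
//         zk1:2181,zk2:2181,zk3:2181 failedBookie:3181 newBookie:3181
//
//   // Or programmatically, letting BookKeeper pick the destination bookies
//   // by passing null as bookieDest:
//   BookKeeperAdmin admin = new BookKeeperAdmin("zk1:2181");
//   admin.recoverBookieData(new InetSocketAddress("failedBookie", 3181), null);
//   admin.close();
// ---------------------------------------------------------------------------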
bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/util/000077500000000000000000000000001244507361200304725ustar00rootroot00000000000000
BookKeeperConstants.java000066400000000000000000000036441244507361200352100ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/util/
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.bookkeeper.util;

/**
 * This class contains constants used in BookKeeper.
 */
public class BookKeeperConstants {

    // Basic constants
    public static final String LEDGER_NODE_PREFIX = "L";
    public static final String COLON = ":";
    public static final String VERSION_FILENAME = "VERSION";
    public static final String PASSWD = "passwd";
    public static final String CURRENT_DIR = "current";
    public static final String READONLY = "readonly";

    // Znodes
    public static final String AVAILABLE_NODE = "available";
    public static final String COOKIE_NODE = "cookies";
    public static final String UNDER_REPLICATION_NODE = "underreplication";
    public static final String DISABLE_NODE = "disable";
    public static final String DEFAULT_ZK_LEDGERS_ROOT_PATH = "/ledgers";
    public static final String LAYOUT_ZNODE = "LAYOUT";
    public static final String INSTANCEID = "INSTANCEID";
}
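// ---------------------------------------------------------------------------
// Usage sketch (not part of the release source): these constants are
// typically combined into ZooKeeper paths. For example, the default
// underreplication znode used by autorecovery resolves to
// "/ledgers/underreplication":
//
//   String urRoot = BookKeeperConstants.DEFAULT_ZK_LEDGERS_ROOT_PATH
//           + "/" + BookKeeperConstants.UNDER_REPLICATION_NODE;
// ---------------------------------------------------------------------------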
bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/util/DiskChecker.java000066400000000000000000000131611244507361200335160ustar00rootroot00000000000000
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.bookkeeper.util;

import java.io.File;
import java.io.IOException;

import com.google.common.annotations.VisibleForTesting;

/**
 * Class that provides utility functions for checking disk problems.
 */
public class DiskChecker {

    private float diskUsageThreshold;

    public static class DiskErrorException extends IOException {
        private static final long serialVersionUID = 9091606022449761729L;

        public DiskErrorException(String msg) {
            super(msg);
        }
    }

    public static class DiskOutOfSpaceException extends IOException {
        private static final long serialVersionUID = 160898797915906860L;

        public DiskOutOfSpaceException(String msg) {
            super(msg);
        }
    }

    public DiskChecker(float threshold) {
        validateThreshold(threshold);
        this.diskUsageThreshold = threshold;
    }

    /**
     * The semantics of mkdirsWithExistsCheck method is different from the
     * mkdirs method provided in the Sun's java.io.File class in the following
     * way: while creating the non-existent parent directories, this method
     * checks for the existence of those directories if the mkdir fails at any
     * point (since that directory might have just been created by some other
     * process). If both mkdir() and the exists() check fail for any seemingly
     * non-existent directory, then we signal an error; Sun's mkdir would
     * signal an error (return false) if a directory it is attempting to
     * create already exists or the mkdir fails.
     *
     * @param dir
     * @return true on success, false on failure
     */
    private static boolean mkdirsWithExistsCheck(File dir) {
        if (dir.mkdir() || dir.exists()) {
            return true;
        }
        File canonDir = null;
        try {
            canonDir = dir.getCanonicalFile();
        } catch (IOException e) {
            return false;
        }
        String parent = canonDir.getParent();
        return (parent != null)
                && (mkdirsWithExistsCheck(new File(parent))
                        && (canonDir.mkdir() || canonDir.exists()));
    }

    /**
     * Checks the disk space available.
     *
     * @param dir
     *            Directory to check for the disk space
     * @throws DiskOutOfSpaceException
     *             Throws {@link DiskOutOfSpaceException} if available space
     *             is less than the threshold.
     */
    @VisibleForTesting
    void checkDiskFull(File dir) throws DiskOutOfSpaceException {
        if (null == dir) {
            return;
        }
        if (dir.exists()) {
            long usableSpace = dir.getUsableSpace();
            long totalSpace = dir.getTotalSpace();
            float free = (float) usableSpace / (float) totalSpace;
            float used = 1f - free;
            if (used > diskUsageThreshold) {
                throw new DiskOutOfSpaceException("Space left on device "
                        + usableSpace + " < threshold " + diskUsageThreshold);
            }
        } else {
            checkDiskFull(dir.getParentFile());
        }
    }

    /**
     * Create the directory if it doesn't exist, and check that it is a
     * readable, writable directory whose disk usage is below the threshold.
     *
     * @param dir
     *            Directory to check for disk errors or a full disk.
     * @throws DiskErrorException
     *             If the disk is having errors
     * @throws DiskOutOfSpaceException
     *             If the disk is full or has less free space than the
     *             threshold
     */
    public void checkDir(File dir) throws DiskErrorException, DiskOutOfSpaceException {
        checkDiskFull(dir);
        if (!mkdirsWithExistsCheck(dir))
            throw new DiskErrorException("can not create directory: " + dir.toString());
        if (!dir.isDirectory())
            throw new DiskErrorException("not a directory: " + dir.toString());
        if (!dir.canRead())
            throw new DiskErrorException("directory is not readable: " + dir.toString());
        if (!dir.canWrite())
            throw new DiskErrorException("directory is not writable: " + dir.toString());
    }

    /**
     * Returns the disk space threshold.
     *
     * @return the disk usage threshold
     */
    @VisibleForTesting
    float getDiskSpaceThreshold() {
        return diskUsageThreshold;
    }

    /**
     * Set the disk space threshold.
     *
     * @param diskSpaceThreshold
     */
    @VisibleForTesting
    void setDiskSpaceThreshold(float diskSpaceThreshold) {
        validateThreshold(diskSpaceThreshold);
        this.diskUsageThreshold = diskSpaceThreshold;
    }

    private void validateThreshold(float diskSpaceThreshold) {
        if (diskSpaceThreshold <= 0 || diskSpaceThreshold >= 1) {
            throw new IllegalArgumentException("Disk space threshold "
                    + diskSpaceThreshold + " is not valid. Should be > 0 and < 1 ");
        }
    }
}
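// ---------------------------------------------------------------------------
// Usage sketch (not part of the release source): a bookie-style disk check.
// The threshold is a used-space fraction in (0, 1); the directory path is
// illustrative.
//
//   DiskChecker checker = new DiskChecker(0.95f); // fail above 95% usage
//   try {
//       checker.checkDir(new File("/data/bk/journal"));
//   } catch (DiskChecker.DiskOutOfSpaceException e) {
//       // usage crossed the threshold
//   } catch (DiskChecker.DiskErrorException e) {
//       // directory missing, unreadable, or unwritable
//   }
// ---------------------------------------------------------------------------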
EntryFormatter.java000066400000000000000000000045451244507361200342530ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/util/
/*
 *
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 *
 */
package org.apache.bookkeeper.util;

import java.io.IOException;

import org.apache.commons.configuration.Configuration;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * Formatter to format an entry.
 */
public abstract class EntryFormatter {

    static Logger LOG = LoggerFactory.getLogger(EntryFormatter.class);

    protected Configuration conf;

    public void setConf(Configuration conf) {
        this.conf = conf;
    }

    /**
     * Format an entry into a readable format.
     *
     * @param data
     *            Data Payload
     */
    public abstract void formatEntry(byte[] data);

    /**
     * Format an entry from an input stream into a readable format.
     *
     * @param input
     *            Input Stream
     */
    public abstract void formatEntry(java.io.InputStream input);

    public static final EntryFormatter STRING_FORMATTER = new StringEntryFormatter();

    public static EntryFormatter newEntryFormatter(Configuration conf, String clsProperty) {
        String cls = conf.getString(clsProperty, StringEntryFormatter.class.getName());
        ClassLoader classLoader = EntryFormatter.class.getClassLoader();
        EntryFormatter formatter;
        try {
            Class<?> aCls = classLoader.loadClass(cls);
            formatter = (EntryFormatter) aCls.newInstance();
            formatter.setConf(conf);
        } catch (Exception e) {
            LOG.warn("No formatter class found : " + cls, e);
            LOG.warn("Using default string formatter.");
            formatter = STRING_FORMATTER;
        }
        return formatter;
    }
}
bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/util/HardLink.java000066400000000000000000000606511244507361200330410ustar00rootroot00000000000000
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
/* Copied wholesale from hadoop-common 0.23.1
   package org.apache.hadoop.fs;
 */
package org.apache.bookkeeper.util;

import java.io.BufferedReader;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.Arrays;

/**
 * Class for creating hardlinks.
 * Supports Unix/Linux, WinXP/2003/Vista via Cygwin, and Mac OS X.
 *
 * The HardLink class was formerly a static inner class of FSUtil,
 * and the methods provided were blatantly non-thread-safe.
 * To enable volume-parallel Update snapshots, we now provide static
 * threadsafe methods that allocate new buffer string arrays
 * upon each call. We also provide an API to hardlink all files in a
 * directory with a single command, which is up to 128 times more
 * efficient - and minimizes the impact of the extra buffer creations.
*/ public class HardLink { public enum OSType { OS_TYPE_UNIX, OS_TYPE_WINXP, OS_TYPE_SOLARIS, OS_TYPE_MAC } public static final OSType osType; private static HardLinkCommandGetter getHardLinkCommand; public final LinkStats linkStats; //not static //initialize the command "getters" statically, so can use their //methods without instantiating the HardLink object static { osType = getOSType(); if (osType == OSType.OS_TYPE_WINXP) { // Windows getHardLinkCommand = new HardLinkCGWin(); } else { // Unix getHardLinkCommand = new HardLinkCGUnix(); //override getLinkCountCommand for the particular Unix variant //Linux is already set as the default - {"stat","-c%h", null} if (osType == OSType.OS_TYPE_MAC) { String[] linkCountCmdTemplate = {"stat","-f%l", null}; HardLinkCGUnix.setLinkCountCmdTemplate(linkCountCmdTemplate); } else if (osType == OSType.OS_TYPE_SOLARIS) { String[] linkCountCmdTemplate = {"ls","-l", null}; HardLinkCGUnix.setLinkCountCmdTemplate(linkCountCmdTemplate); } } } public HardLink() { linkStats = new LinkStats(); } static private OSType getOSType() { String osName = System.getProperty("os.name"); if (osName.contains("Windows") && (osName.contains("XP") || osName.contains("2003") || osName.contains("Vista") || osName.contains("Windows_7") || osName.contains("Windows 7") || osName.contains("Windows7"))) { return OSType.OS_TYPE_WINXP; } else if (osName.contains("SunOS") || osName.contains("Solaris")) { return OSType.OS_TYPE_SOLARIS; } else if (osName.contains("Mac")) { return OSType.OS_TYPE_MAC; } else { return OSType.OS_TYPE_UNIX; } } /** * This abstract class bridges the OS-dependent implementations of the * needed functionality for creating hardlinks and querying link counts. * The particular implementation class is chosen during * static initialization phase of the HardLink class. * The "getter" methods construct shell command strings for various purposes. */ private static abstract class HardLinkCommandGetter { /** * Get the command string needed to hardlink a bunch of files from * a single source directory into a target directory. The source directory * is not specified here, but the command will be executed using the source * directory as the "current working directory" of the shell invocation. 
* * @param fileBaseNames - array of path-less file names, relative * to the source directory * @param linkDir - target directory where the hardlinks will be put * @return - an array of Strings suitable for use as a single shell command * with {@link Runtime.exec()} * @throws IOException - if any of the file or path names misbehave */ abstract String[] linkMult(String[] fileBaseNames, File linkDir) throws IOException; /** * Get the command string needed to hardlink a single file */ abstract String[] linkOne(File file, File linkName) throws IOException; /** * Get the command string to query the hardlink count of a file */ abstract String[] linkCount(File file) throws IOException; /** * Calculate the total string length of the shell command * resulting from execution of linkMult, plus the length of the * source directory name (which will also be provided to the shell) * * @param fileDir - source directory, parent of fileBaseNames * @param fileBaseNames - array of path-less file names, relative * to the source directory * @param linkDir - target directory where the hardlinks will be put * @return - total data length (must not exceed maxAllowedCmdArgLength) * @throws IOException */ abstract int getLinkMultArgLength( File fileDir, String[] fileBaseNames, File linkDir) throws IOException; /** * Get the maximum allowed string length of a shell command on this OS, * which is just the documented minimum guaranteed supported command * length - aprx. 32KB for Unix, and 8KB for Windows. */ abstract int getMaxAllowedCmdArgLength(); } /** * Implementation of HardLinkCommandGetter class for Unix */ static class HardLinkCGUnix extends HardLinkCommandGetter { private static String[] hardLinkCommand = {"ln", null, null}; private static String[] hardLinkMultPrefix = {"ln"}; private static String[] hardLinkMultSuffix = {null}; private static String[] getLinkCountCommand = {"stat","-c%h", null}; //Unix guarantees at least 32K bytes cmd length. 
//Subtract another 64b to allow for Java 'exec' overhead private static final int maxAllowedCmdArgLength = 32*1024 - 65; private static synchronized void setLinkCountCmdTemplate(String[] template) { //May update this for specific unix variants, //after static initialization phase getLinkCountCommand = template; } /* * @see org.apache.hadoop.fs.HardLink.HardLinkCommandGetter#linkOne(java.io.File, java.io.File) */ @Override String[] linkOne(File file, File linkName) throws IOException { String[] buf = new String[hardLinkCommand.length]; System.arraycopy(hardLinkCommand, 0, buf, 0, hardLinkCommand.length); //unix wants argument order: "ln " buf[1] = makeShellPath(file); buf[2] = makeShellPath(linkName); return buf; } /* * @see org.apache.hadoop.fs.HardLink.HardLinkCommandGetter#linkMult(java.lang.String[], java.io.File) */ @Override String[] linkMult(String[] fileBaseNames, File linkDir) throws IOException { String[] buf = new String[fileBaseNames.length + hardLinkMultPrefix.length + hardLinkMultSuffix.length]; int mark=0; System.arraycopy(hardLinkMultPrefix, 0, buf, mark, hardLinkMultPrefix.length); mark += hardLinkMultPrefix.length; System.arraycopy(fileBaseNames, 0, buf, mark, fileBaseNames.length); mark += fileBaseNames.length; buf[mark] = makeShellPath(linkDir); return buf; } /* * @see org.apache.hadoop.fs.HardLink.HardLinkCommandGetter#linkCount(java.io.File) */ @Override String[] linkCount(File file) throws IOException { String[] buf = new String[getLinkCountCommand.length]; System.arraycopy(getLinkCountCommand, 0, buf, 0, getLinkCountCommand.length); buf[getLinkCountCommand.length - 1] = makeShellPath(file); return buf; } /* * @see org.apache.hadoop.fs.HardLink.HardLinkCommandGetter#getLinkMultArgLength(java.io.File, java.lang.String[], java.io.File) */ @Override int getLinkMultArgLength(File fileDir, String[] fileBaseNames, File linkDir) throws IOException{ int sum = 0; for (String x : fileBaseNames) { // add 1 to account for terminal null or delimiter space sum += 1 + ((x == null) ? 0 : x.length()); } sum += 2 + makeShellPath(fileDir).length() + makeShellPath(linkDir).length(); //add the fixed overhead of the hardLinkMult prefix and suffix sum += 3; //length("ln") + 1 return sum; } /* * @see org.apache.hadoop.fs.HardLink.HardLinkCommandGetter#getMaxAllowedCmdArgLength() */ @Override int getMaxAllowedCmdArgLength() { return maxAllowedCmdArgLength; } } /** * Implementation of HardLinkCommandGetter class for Windows * * Note that the linkCount shell command for Windows is actually * a Cygwin shell command, and depends on ${cygwin}/bin * being in the Windows PATH environment variable, so * stat.exe can be found. */ static class HardLinkCGWin extends HardLinkCommandGetter { //The Windows command getter impl class and its member fields are //package-private ("default") access instead of "private" to assist //unit testing (sort of) on non-Win servers static String[] hardLinkCommand = { "fsutil","hardlink","create", null, null}; static String[] hardLinkMultPrefix = { "cmd","/q","/c","for", "%f", "in", "("}; static String hardLinkMultDir = "\\%f"; static String[] hardLinkMultSuffix = { ")", "do", "fsutil", "hardlink", "create", null, "%f", "1>NUL"}; static String[] getLinkCountCommand = {"stat","-c%h", null}; //Windows guarantees only 8K - 1 bytes cmd length. 
//Subtract another 64b to allow for Java 'exec' overhead static final int maxAllowedCmdArgLength = 8*1024 - 65; /* * @see org.apache.hadoop.fs.HardLink.HardLinkCommandGetter#linkOne(java.io.File, java.io.File) */ @Override String[] linkOne(File file, File linkName) throws IOException { String[] buf = new String[hardLinkCommand.length]; System.arraycopy(hardLinkCommand, 0, buf, 0, hardLinkCommand.length); //windows wants argument order: "create " buf[4] = file.getCanonicalPath(); buf[3] = linkName.getCanonicalPath(); return buf; } /* * @see org.apache.hadoop.fs.HardLink.HardLinkCommandGetter#linkMult(java.lang.String[], java.io.File) */ @Override String[] linkMult(String[] fileBaseNames, File linkDir) throws IOException { String[] buf = new String[fileBaseNames.length + hardLinkMultPrefix.length + hardLinkMultSuffix.length]; String td = linkDir.getCanonicalPath() + hardLinkMultDir; int mark=0; System.arraycopy(hardLinkMultPrefix, 0, buf, mark, hardLinkMultPrefix.length); mark += hardLinkMultPrefix.length; System.arraycopy(fileBaseNames, 0, buf, mark, fileBaseNames.length); mark += fileBaseNames.length; System.arraycopy(hardLinkMultSuffix, 0, buf, mark, hardLinkMultSuffix.length); mark += hardLinkMultSuffix.length; buf[mark - 3] = td; return buf; } /* * @see org.apache.hadoop.fs.HardLink.HardLinkCommandGetter#linkCount(java.io.File) */ @Override String[] linkCount(File file) throws IOException { String[] buf = new String[getLinkCountCommand.length]; System.arraycopy(getLinkCountCommand, 0, buf, 0, getLinkCountCommand.length); //The linkCount command is actually a Cygwin shell command, //not a Windows shell command, so we should use "makeShellPath()" //instead of "getCanonicalPath()". However, that causes another //shell exec to "cygpath.exe", and "stat.exe" actually can handle //DOS-style paths (it just prints a couple hundred bytes of warning //to stderr), so we use the more efficient "getCanonicalPath()". buf[getLinkCountCommand.length - 1] = file.getCanonicalPath(); return buf; } /* * @see org.apache.hadoop.fs.HardLink.HardLinkCommandGetter#getLinkMultArgLength(java.io.File, java.lang.String[], java.io.File) */ @Override int getLinkMultArgLength(File fileDir, String[] fileBaseNames, File linkDir) throws IOException { int sum = 0; for (String x : fileBaseNames) { // add 1 to account for terminal null or delimiter space sum += 1 + ((x == null) ? 0 : x.length()); } sum += 2 + fileDir.getCanonicalPath().length() + linkDir.getCanonicalPath().length(); //add the fixed overhead of the hardLinkMult command //(prefix, suffix, and Dir suffix) sum += ("cmd.exe /q /c for %f in ( ) do " + "fsutil hardlink create \\%f %f 1>NUL ").length(); return sum; } /* * @see org.apache.hadoop.fs.HardLink.HardLinkCommandGetter#getMaxAllowedCmdArgLength() */ @Override int getMaxAllowedCmdArgLength() { return maxAllowedCmdArgLength; } } /** * Calculate the nominal length of all contributors to the total * commandstring length, including fixed overhead of the OS-dependent * command. It's protected rather than private, to assist unit testing, * but real clients are not expected to need it -- see the way * createHardLinkMult() uses it internally so the user doesn't need to worry * about it. 
* * @param fileDir - source directory, parent of fileBaseNames * @param fileBaseNames - array of path-less file names, relative * to the source directory * @param linkDir - target directory where the hardlinks will be put * @return - total data length (must not exceed maxAllowedCmdArgLength) * @throws IOException */ protected static int getLinkMultArgLength( File fileDir, String[] fileBaseNames, File linkDir) throws IOException { return getHardLinkCommand.getLinkMultArgLength(fileDir, fileBaseNames, linkDir); } /** * Return this private value for use by unit tests. * Shell commands are not allowed to have a total string length * exceeding this size. */ protected static int getMaxAllowedCmdArgLength() { return getHardLinkCommand.getMaxAllowedCmdArgLength(); } /* * **************************************************** * Complexity is above. User-visible functionality is below * **************************************************** */ /** * Creates a hardlink * @param file - existing source file * @param linkName - desired target link file */ public static void createHardLink(File file, File linkName) throws IOException { if (file == null) { throw new IOException( "invalid arguments to createHardLink: source file is null"); } if (linkName == null) { throw new IOException( "invalid arguments to createHardLink: link name is null"); } // construct and execute shell command String[] hardLinkCommand = getHardLinkCommand.linkOne(file, linkName); Process process = Runtime.getRuntime().exec(hardLinkCommand); try { if (process.waitFor() != 0) { String errMsg = new BufferedReader(new InputStreamReader( process.getInputStream())).readLine(); if (errMsg == null) errMsg = ""; String inpMsg = new BufferedReader(new InputStreamReader( process.getErrorStream())).readLine(); if (inpMsg == null) inpMsg = ""; throw new IOException(errMsg + inpMsg); } } catch (InterruptedException e) { throw new IOException(e); } finally { process.destroy(); } } /** * Creates hardlinks from multiple existing files within one parent * directory, into one target directory. * @param parentDir - directory containing source files * @param fileBaseNames - list of path-less file names, as returned by * parentDir.list() * @param linkDir - where the hardlinks should be put. It must already exist. * * If the list of files is too long (overflows maxAllowedCmdArgLength), * we will automatically split it into multiple invocations of the * underlying method. */ public static void createHardLinkMult(File parentDir, String[] fileBaseNames, File linkDir) throws IOException { //This is the public method all non-test clients are expected to use. //Normal case - allow up to maxAllowedCmdArgLength characters in the cmd createHardLinkMult(parentDir, fileBaseNames, linkDir, getHardLinkCommand.getMaxAllowedCmdArgLength()); } /* * Implements {@link createHardLinkMult} with added variable "maxLength", * to ease unit testing of the auto-splitting feature for long lists. * Likewise why it returns "callCount", the number of sub-arrays that * the file list had to be split into. * Non-test clients are expected to call the public method instead. 
*/ protected static int createHardLinkMult(File parentDir, String[] fileBaseNames, File linkDir, int maxLength) throws IOException { if (parentDir == null) { throw new IOException( "invalid arguments to createHardLinkMult: parent directory is null"); } if (linkDir == null) { throw new IOException( "invalid arguments to createHardLinkMult: link directory is null"); } if (fileBaseNames == null) { throw new IOException( "invalid arguments to createHardLinkMult: " + "filename list can be empty but not null"); } if (fileBaseNames.length == 0) { //the OS cmds can't handle empty list of filenames, //but it's legal, so just return. return 0; } if (!linkDir.exists()) { throw new FileNotFoundException(linkDir + " not found."); } //if the list is too long, split into multiple invocations int callCount = 0; if (getLinkMultArgLength(parentDir, fileBaseNames, linkDir) > maxLength && fileBaseNames.length > 1) { String[] list1 = Arrays.copyOf(fileBaseNames, fileBaseNames.length/2); callCount += createHardLinkMult(parentDir, list1, linkDir, maxLength); String[] list2 = Arrays.copyOfRange(fileBaseNames, fileBaseNames.length/2, fileBaseNames.length); callCount += createHardLinkMult(parentDir, list2, linkDir, maxLength); return callCount; } else { callCount = 1; } // construct and execute shell command String[] hardLinkCommand = getHardLinkCommand.linkMult(fileBaseNames, linkDir); Process process = Runtime.getRuntime().exec(hardLinkCommand, null, parentDir); try { if (process.waitFor() != 0) { String errMsg = new BufferedReader(new InputStreamReader( process.getInputStream())).readLine(); if (errMsg == null) errMsg = ""; String inpMsg = new BufferedReader(new InputStreamReader( process.getErrorStream())).readLine(); if (inpMsg == null) inpMsg = ""; throw new IOException(errMsg + inpMsg); } } catch (InterruptedException e) { throw new IOException(e); } finally { process.destroy(); } return callCount; } /** * Retrieves the number of links to the specified file. */ public static int getLinkCount(File fileName) throws IOException { if (fileName == null) { throw new IOException( "invalid argument to getLinkCount: file name is null"); } if (!fileName.exists()) { throw new FileNotFoundException(fileName + " not found."); } // construct and execute shell command String[] cmd = getHardLinkCommand.linkCount(fileName); String inpMsg = null; String errMsg = null; int exitValue = -1; BufferedReader in = null; BufferedReader err = null; Process process = Runtime.getRuntime().exec(cmd); try { exitValue = process.waitFor(); in = new BufferedReader(new InputStreamReader( process.getInputStream())); inpMsg = in.readLine(); err = new BufferedReader(new InputStreamReader( process.getErrorStream())); errMsg = err.readLine(); if (inpMsg == null || exitValue != 0) { throw createIOException(fileName, inpMsg, errMsg, exitValue, null); } if (osType == OSType.OS_TYPE_SOLARIS) { String[] result = inpMsg.split("\\s+"); return Integer.parseInt(result[1]); } else { return Integer.parseInt(inpMsg); } } catch (NumberFormatException e) { throw createIOException(fileName, inpMsg, errMsg, exitValue, e); } catch (InterruptedException e) { throw createIOException(fileName, inpMsg, errMsg, exitValue, e); } finally { process.destroy(); if (in != null) in.close(); if (err != null) err.close(); } } /* Create an IOException for failing to get link count. 
 */
    private static IOException createIOException(File f, String message,
            String error, int exitvalue, Exception cause) {
        final String winErrMsg = "; Windows errors in getLinkCount are often due "
                + "to Cygwin misconfiguration";
        final String s = "Failed to get link count on file " + f
                + ": message=" + message
                + "; error=" + error
                + ((osType == OSType.OS_TYPE_WINXP) ? winErrMsg : "")
                + "; exit value=" + exitvalue;
        return (cause == null) ? new IOException(s) : new IOException(s, cause);
    }

    /**
     * HardLink statistics counters and methods.
     * Not multi-thread safe, obviously.
     * Init is called during HardLink instantiation, above.
     *
     * These are intended for use by knowledgeable clients, not internally,
     * because many of the internal methods are static and can't update these
     * per-instance counters.
     */
    public static class LinkStats {
        public int countDirs = 0;
        public int countSingleLinks = 0;
        public int countMultLinks = 0;
        public int countFilesMultLinks = 0;
        public int countEmptyDirs = 0;
        public int countPhysicalFileCopies = 0;

        public void clear() {
            countDirs = 0;
            countSingleLinks = 0;
            countMultLinks = 0;
            countFilesMultLinks = 0;
            countEmptyDirs = 0;
            countPhysicalFileCopies = 0;
        }

        public String report() {
            return "HardLinkStats: " + countDirs + " Directories, including "
                    + countEmptyDirs + " Empty Directories, "
                    + countSingleLinks + " single Link operations, "
                    + countMultLinks + " multi-Link operations, linking "
                    + countFilesMultLinks + " files, total "
                    + (countSingleLinks + countFilesMultLinks)
                    + " linkable files. Also physically copied "
                    + countPhysicalFileCopies + " other files.";
        }
    }

    /**
     * Convert an os-native filename to a path that works for the shell.
     *
     * @param file The file whose path to convert
     * @return The unix pathname
     * @throws IOException on windows, there can be problems with the subprocess
     */
    public static String makeShellPath(File file) throws IOException {
        String filename = file.getCanonicalPath();
        if (System.getProperty("os.name").startsWith("Windows")) {
            BufferedReader r = null;
            try {
                ProcessBuilder pb = new ProcessBuilder("cygpath", "-u", filename);
                Process p = pb.start();
                int err = p.waitFor();
                if (err != 0) {
                    throw new IOException("Couldn't resolve path " + filename + "(" + err + ")");
                }
                r = new BufferedReader(new InputStreamReader(p.getInputStream()));
                return r.readLine();
            } catch (InterruptedException ie) {
                throw new IOException("Couldn't resolve path " + filename, ie);
            } finally {
                if (r != null) {
                    r.close();
                }
            }
        } else {
            return filename;
        }
    }
}
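// ---------------------------------------------------------------------------
// Usage sketch (not part of the release source): linking one file, then a
// whole directory of files, and querying a link count. Paths are
// illustrative; on Windows this shells out to Cygwin as described above.
//
//   HardLink.createHardLink(new File("/data/ledgers/0.log"),
//                           new File("/backup/0.log"));
//   File src = new File("/data/ledgers");
//   HardLink.createHardLinkMult(src, src.list(), new File("/backup"));
//   int count = HardLink.getLinkCount(new File("/data/ledgers/0.log")); // now 2
// ---------------------------------------------------------------------------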
bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/util/IOUtils.java000066400000000000000000000056361244507361200326760ustar00rootroot00000000000000
/*
 *
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 *
 */
package org.apache.bookkeeper.util;

import java.io.IOException;

import org.slf4j.Logger;

/**
 * A utility class for I/O related functionality.
 */
public class IOUtils {

    /**
     * Close the Closeable objects and ignore any {@link IOException} or
     * null pointers. Must only be used for cleanup in exception handlers.
     *
     * @param log
     *            the log to record problems to at debug level. Can be null.
     * @param closeables
     *            the objects to close
     */
    public static void close(Logger log, java.io.Closeable... closeables) {
        for (java.io.Closeable c : closeables) {
            if (c != null) {
                try {
                    c.close();
                } catch (IOException e) {
                    if (log != null && log.isDebugEnabled()) {
                        log.debug("Exception in closing " + c, e);
                    }
                }
            }
        }
    }

    /**
     * Confirm prompt for the console operations.
     *
     * @param prompt
     *            Prompt message to be displayed on console
     * @return Returns true if confirmed as 'Y', returns false if confirmed as
     *         'N'
     * @throws IOException
     */
    public static boolean confirmPrompt(String prompt) throws IOException {
        while (true) {
            System.out.print(prompt + " (Y or N) ");
            StringBuilder responseBuilder = new StringBuilder();
            while (true) {
                int c = System.in.read();
                if (c == -1 || c == '\r' || c == '\n') {
                    break;
                }
                responseBuilder.append((char) c);
            }
            String response = responseBuilder.toString();
            if (response.equalsIgnoreCase("y") || response.equalsIgnoreCase("yes")) {
                return true;
            } else if (response.equalsIgnoreCase("n") || response.equalsIgnoreCase("no")) {
                return false;
            }
            System.out.println("Invalid input: " + response);
            // else ask them again
        }
    }
}
LocalBookKeeper.java000066400000000000000000000232111244507361200342600ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/util/
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
*/ package org.apache.bookkeeper.util; import java.io.BufferedReader; import java.io.File; import java.io.IOException; import java.io.InputStreamReader; import java.io.OutputStream; import java.net.InetAddress; import java.net.InetSocketAddress; import java.net.Socket; import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; import org.apache.bookkeeper.bookie.BookieException; import org.apache.bookkeeper.conf.ServerConfiguration; import org.apache.bookkeeper.proto.BookieServer; import org.apache.bookkeeper.replication.ReplicationException.CompatibilityException; import org.apache.bookkeeper.replication.ReplicationException.UnavailableException; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.apache.zookeeper.CreateMode; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.WatchedEvent; import org.apache.zookeeper.Watcher; import org.apache.zookeeper.ZooKeeper; import org.apache.zookeeper.Watcher.Event.KeeperState; import org.apache.zookeeper.ZooDefs.Ids; import org.apache.zookeeper.server.NIOServerCnxnFactory; import org.apache.zookeeper.server.ZooKeeperServer; public class LocalBookKeeper { protected static final Logger LOG = LoggerFactory.getLogger(LocalBookKeeper.class); public static final int CONNECTION_TIMEOUT = 30000; int numberOfBookies; public LocalBookKeeper() { numberOfBookies = 3; } public LocalBookKeeper(int numberOfBookies) { this(); this.numberOfBookies = numberOfBookies; LOG.info("Running " + this.numberOfBookies + " bookie(s)."); } private final String HOSTPORT = "127.0.0.1:2181"; NIOServerCnxnFactory serverFactory; ZooKeeperServer zks; ZooKeeper zkc; int ZooKeeperDefaultPort = 2181; static int zkSessionTimeOut = 5000; File ZkTmpDir; //BookKeeper variables File tmpDirs[]; BookieServer bs[]; ServerConfiguration bsConfs[]; Integer initialPort = 5000; /** * @param args */ private void runZookeeper(int maxCC) throws IOException { // create a ZooKeeper server(dataDir, dataLogDir, port) LOG.info("Starting ZK server"); //ServerStats.registerAsConcrete(); //ClientBase.setupTestEnv(); ZkTmpDir = File.createTempFile("zookeeper", "test"); if (!ZkTmpDir.delete() || !ZkTmpDir.mkdir()) { throw new IOException("Couldn't create zk directory " + ZkTmpDir); } try { zks = new ZooKeeperServer(ZkTmpDir, ZkTmpDir, ZooKeeperDefaultPort); serverFactory = new NIOServerCnxnFactory(); serverFactory.configure(new InetSocketAddress(ZooKeeperDefaultPort), maxCC); serverFactory.startup(zks); } catch (Exception e) { // TODO Auto-generated catch block LOG.error("Exception while instantiating ZooKeeper", e); } boolean b = waitForServerUp(HOSTPORT, CONNECTION_TIMEOUT); LOG.debug("ZooKeeper server up: {}", b); } private void initializeZookeper() throws IOException { LOG.info("Instantiate ZK Client"); //initialize the zk client with values try { ZKConnectionWatcher zkConnectionWatcher = new ZKConnectionWatcher(); zkc = new ZooKeeper(HOSTPORT, zkSessionTimeOut, zkConnectionWatcher); zkConnectionWatcher.waitForConnection(); zkc.create("/ledgers", new byte[0], Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT); zkc.create("/ledgers/available", new byte[0], Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT); // No need to create an entry for each requested bookie anymore as the // BookieServers will register themselves with ZooKeeper on startup. 
} catch (KeeperException e) { // TODO Auto-generated catch block LOG.error("Exception while creating znodes", e); } catch (InterruptedException e) { // TODO Auto-generated catch block LOG.error("Interrupted while creating znodes", e); } } private void runBookies(ServerConfiguration baseConf) throws IOException, KeeperException, InterruptedException, BookieException, UnavailableException, CompatibilityException { LOG.info("Starting Bookie(s)"); // Create Bookie Servers (B1, B2, B3) tmpDirs = new File[numberOfBookies]; bs = new BookieServer[numberOfBookies]; bsConfs = new ServerConfiguration[numberOfBookies]; for(int i = 0; i < numberOfBookies; i++) { tmpDirs[i] = File.createTempFile("bookie" + Integer.toString(i), "test"); if (!tmpDirs[i].delete() || !tmpDirs[i].mkdir()) { throw new IOException("Couldn't create bookie dir " + tmpDirs[i]); } bsConfs[i] = new ServerConfiguration(baseConf); // override settings bsConfs[i].setBookiePort(initialPort + i); bsConfs[i].setZkServers(InetAddress.getLocalHost().getHostAddress() + ":" + ZooKeeperDefaultPort); bsConfs[i].setJournalDirName(tmpDirs[i].getPath()); bsConfs[i].setLedgerDirNames(new String[] { tmpDirs[i].getPath() }); bsConfs[i].setAllowLoopback(true); bs[i] = new BookieServer(bsConfs[i]); bs[i].start(); } } public static void main(String[] args) throws IOException, KeeperException, InterruptedException, BookieException, UnavailableException, CompatibilityException { if(args.length < 1) { usage(); System.exit(-1); } LocalBookKeeper lb = new LocalBookKeeper(Integer.parseInt(args[0])); ServerConfiguration conf = new ServerConfiguration(); if (args.length >= 2) { String confFile = args[1]; try { conf.loadConf(new File(confFile).toURI().toURL()); LOG.info("Using configuration file " + confFile); } catch (Exception e) { // load conf failed LOG.warn("Error loading configuration file " + confFile, e); } } lb.runZookeeper(1000); lb.initializeZookeper(); lb.runBookies(conf); while (true) { Thread.sleep(5000); } } private static void usage() { System.err.println("Usage: LocalBookKeeper number-of-bookies"); } /* Watching SyncConnected event from ZooKeeper */ static class ZKConnectionWatcher implements Watcher { private CountDownLatch clientConnectLatch = new CountDownLatch(1); @Override public void process(WatchedEvent event) { if (event.getState() == KeeperState.SyncConnected) { clientConnectLatch.countDown(); } } // Waiting for the SyncConnected event from the ZooKeeper server public void waitForConnection() throws IOException { try { if (!clientConnectLatch.await(zkSessionTimeOut, TimeUnit.MILLISECONDS)) { throw new IOException( "Couldn't connect to zookeeper server"); } } catch (InterruptedException e) { throw new IOException( "Interrupted when connecting to zookeeper server", e); } } } public static boolean waitForServerUp(String hp, long timeout) { long start = MathUtils.now(); String split[] = hp.split(":"); String host = split[0]; int port = Integer.parseInt(split[1]); while (true) { try { Socket sock = new Socket(host, port); BufferedReader reader = null; try { OutputStream outstream = sock.getOutputStream(); outstream.write("stat".getBytes()); outstream.flush(); reader = new BufferedReader( new InputStreamReader(sock.getInputStream())); String line = reader.readLine(); if (line != null && line.startsWith("Zookeeper version:")) { LOG.info("Server UP"); return true; } } finally { sock.close(); if (reader != null) { reader.close(); } } } catch (IOException e) { // ignore as this is expected LOG.info("server " + hp + " not up " + e); } if 
(MathUtils.now() > start + timeout) {
                break;
            }
            try {
                Thread.sleep(250);
            } catch (InterruptedException e) {
                // ignore
            }
        }
        return false;
    }
}
bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/util/Main.java000066400000000000000000000032101244507361200322170ustar00rootroot00000000000000package org.apache.bookkeeper.util;

/*
 *
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 *
 */

import java.io.IOException;

import org.apache.bookkeeper.proto.BookieClient;
import org.apache.bookkeeper.proto.BookieServer;

public class Main {

    static void usage() {
        System.err.println("USAGE: bookkeeper client|bookie");
    }

    /**
     * @param args
     * @throws InterruptedException
     * @throws IOException
     */
    public static void main(String[] args) throws Exception {
        if (args.length < 1
                || !(args[0].equals("client") || args[0].equals("bookie"))) {
            usage();
            return;
        }
        String newArgs[] = new String[args.length - 1];
        System.arraycopy(args, 1, newArgs, 0, newArgs.length);
        if (args[0].equals("bookie")) {
            BookieServer.main(newArgs);
        } else {
            BookieClient.main(newArgs);
        }
    }
}
bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/util/MathUtils.java000066400000000000000000000053111244507361200332470ustar00rootroot00000000000000
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.bookkeeper.util;

/**
 * Provides misc math functions that don't come standard.
 */
public class MathUtils {

    private static final long NANOSECONDS_PER_MILLISECOND = 1000000;

    public static int signSafeMod(long dividend, int divisor) {
        int mod = (int) (dividend % divisor);
        if (mod < 0) {
            mod += divisor;
        }
        return mod;
    }

    /**
     * Current time from some arbitrary time base in the past, counting in
     * milliseconds, and not affected by settimeofday or similar system clock
     * changes. This is appropriate to use when computing how much longer to
     * wait for an interval to expire.
     *
     * NOTE: only use it for measuring.
     * http://docs.oracle.com/javase/1.5.0/docs/api/java/lang/System.html#nanoTime%28%29
     *
     * @return current time in milliseconds.
     */
    public static long now() {
        return System.nanoTime() / NANOSECONDS_PER_MILLISECOND;
    }

    /**
     * Current time from some arbitrary time base in the past, counting in
     * nanoseconds, and not affected by settimeofday or similar system clock
     * changes. This is appropriate to use when computing how much longer to
     * wait for an interval to expire.
     *
     * NOTE: only use it for measuring.
     * http://docs.oracle.com/javase/1.5.0/docs/api/java/lang/System.html#nanoTime%28%29
     *
     * @return current time in nanoseconds.
     */
    public static long nowInNano() {
        return System.nanoTime();
    }

    /**
     * Milliseconds elapsed since the time specified. The input is a
     * nanosecond timestamp as returned by {@link #nowInNano()}; the only
     * conversion happens when computing the elapsed time.
     *
     * @param startNanoTime the start of the interval that we are measuring
     * @return elapsed time in milliseconds.
     */
    public static long elapsedMSec(long startNanoTime) {
        return (System.nanoTime() - startNanoTime) / NANOSECONDS_PER_MILLISECOND;
    }
}
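// ---------------------------------------------------------------------------
// Usage sketch (not part of the release source): measuring an interval with
// the monotonic helpers above instead of System.currentTimeMillis(), which
// can jump if the wall clock is adjusted. doSomething() is illustrative.
//
//   long startNanos = MathUtils.nowInNano();
//   doSomething();
//   long elapsedMillis = MathUtils.elapsedMSec(startNanos);
// ---------------------------------------------------------------------------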
* */ public class OrderedSafeExecutor { ExecutorService[] threads; Random rand = new Random(); public OrderedSafeExecutor(int numThreads) { if (numThreads <= 0) { throw new IllegalArgumentException(); } threads = new ExecutorService[numThreads]; for (int i = 0; i < numThreads; i++) { threads[i] = Executors.newSingleThreadExecutor(); } } ExecutorService chooseThread() { // skip random # generation in this special case if (threads.length == 1) { return threads[0]; } return threads[rand.nextInt(threads.length)]; } ExecutorService chooseThread(Object orderingKey) { // skip hashcode generation in this special case if (threads.length == 1) { return threads[0]; } return threads[MathUtils.signSafeMod(orderingKey.hashCode(), threads.length)]; } /** * Schedules a one-time action to execute. */ public void submit(SafeRunnable r) { chooseThread().submit(r); } /** * Schedules a one-time action to execute with an ordering guarantee on the key. * @param orderingKey * @param r */ public void submitOrdered(Object orderingKey, SafeRunnable r) { chooseThread(orderingKey).submit(r); } public void shutdown() { for (int i = 0; i < threads.length; i++) { threads[i].shutdown(); } } public boolean awaitTermination(long timeout, TimeUnit unit) throws InterruptedException { boolean ret = true; for (int i = 0; i < threads.length; i++) { ret = ret && threads[i].awaitTermination(timeout, unit); } return ret; } /** * Generic callback implementation which will run the * callback in the thread which matches the ordering key */ public static abstract class OrderedSafeGenericCallback<T> implements GenericCallback<T> { private final OrderedSafeExecutor executor; private final Object orderingKey; /** * @param executor The executor on which to run the callback * @param orderingKey Key used to decide which thread the callback * should run on. */ public OrderedSafeGenericCallback(OrderedSafeExecutor executor, Object orderingKey) { this.executor = executor; this.orderingKey = orderingKey; } @Override public final void operationComplete(final int rc, final T result) { executor.submitOrdered(orderingKey, new SafeRunnable() { @Override public void safeRun() { safeOperationComplete(rc, result); } }); } public abstract void safeOperationComplete(int rc, T result); } } ReflectionUtils.java000066400000000000000000000125231244507361200343740ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/util/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License.
* */ package org.apache.bookkeeper.util; import java.lang.reflect.Constructor; import java.util.Map; import java.util.concurrent.ConcurrentHashMap; import org.apache.commons.configuration.Configuration; import org.apache.commons.configuration.ConfigurationException; /** * General Class Reflection Utils */ public class ReflectionUtils { private static final Map<Class<?>, Constructor<?>> constructorCache = new ConcurrentHashMap<Class<?>, Constructor<?>>(); /** * Get the value of the name property as a Class. * If no such property is specified, then defaultCls is returned. * * @param conf * Configuration Object. * @param name * Class Property Name. * @param defaultCls * Default Class to be returned. * @param classLoader * Class Loader to load class. * @return property value as a Class, or defaultCls. * @throws ConfigurationException */ public static Class<?> getClass(Configuration conf, String name, Class<?> defaultCls, ClassLoader classLoader) throws ConfigurationException { String valueStr = conf.getString(name); if (null == valueStr) { return defaultCls; } try { return Class.forName(valueStr, true, classLoader); } catch (ClassNotFoundException cnfe) { throw new ConfigurationException(cnfe); } } /** * Get the value of the name property as a Class implementing * the interface specified by xface. * * If no such property is specified, then defaultValue is returned. * * An exception is thrown if the returned class does not implement the named interface. * * @param conf * Configuration Object. * @param name * Class Property Name. * @param defaultValue * Default Class to be returned. * @param xface * The interface implemented by the named class. * @param classLoader * Class Loader to load class. * @return property value as a Class, or defaultValue. * @throws ConfigurationException */ public static <T> Class<? extends T> getClass(Configuration conf, String name, Class<? extends T> defaultValue, Class<T> xface, ClassLoader classLoader) throws ConfigurationException { try { Class<?> theCls = getClass(conf, name, defaultValue, classLoader); if (null != theCls && !xface.isAssignableFrom(theCls)) { throw new ConfigurationException(theCls + " not " + xface.getName()); } else if (null != theCls) { return theCls.asSubclass(xface); } else { return null; } } catch (Exception e) { throw new ConfigurationException(e); } } /** * Create an object for the given class. * * @param theCls * class of which an object is created. * @return a new object */ @SuppressWarnings("unchecked") public static <T> T newInstance(Class<T> theCls) { T result; try { Constructor<T> meth = (Constructor<T>) constructorCache.get(theCls); if (null == meth) { meth = theCls.getDeclaredConstructor(); meth.setAccessible(true); constructorCache.put(theCls, meth); } result = meth.newInstance(); } catch (Exception e) { throw new RuntimeException(e); } return result; } /** * Create an object using the given class name. * * @param clsName * class name of which an object is created. * @param xface * The interface implemented by the named class.
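*
* For example (the implementation class name is hypothetical; Tool is the
* interface defined in this package):
* <pre>{@code
* Tool tool = ReflectionUtils.newInstance("org.example.MyTool", Tool.class);
* }</pre>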
* @return a new object */ @SuppressWarnings("unchecked") public static <T> T newInstance(String clsName, Class<T> xface) { Class<?> theCls; try { theCls = Class.forName(clsName); } catch (ClassNotFoundException cnfe) { throw new RuntimeException(cnfe); } if (!xface.isAssignableFrom(theCls)) { throw new RuntimeException(clsName + " not " + xface.getName()); } return newInstance(theCls.asSubclass(xface)); } } SafeRunnable.java000066400000000000000000000023571244507361200336320ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/utilpackage org.apache.bookkeeper.util; /** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import org.slf4j.Logger; import org.slf4j.LoggerFactory; public abstract class SafeRunnable implements Runnable { static final Logger logger = LoggerFactory.getLogger(SafeRunnable.class); @Override public void run() { try { safeRun(); } catch (Throwable t) { logger.error("Unexpected throwable caught ", t); } } public abstract void safeRun(); } bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/util/SnapshotMap.java000066400000000000000000000077241244507361200336020ustar00rootroot00000000000000package org.apache.bookkeeper.util; /** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import java.util.concurrent.ConcurrentSkipListMap; import java.util.concurrent.ConcurrentHashMap; import java.util.Map; import java.util.NavigableMap; import java.util.concurrent.locks.ReentrantReadWriteLock; /** * A snapshotable map. */ public class SnapshotMap<K, V> { // stores recent updates volatile Map<K, V> updates; volatile Map<K, V> updatesToMerge; // map stores all snapshot data volatile NavigableMap<K, V> snapshot; final ReentrantReadWriteLock lock = new ReentrantReadWriteLock(); public SnapshotMap() { updates = new ConcurrentHashMap<K, V>(); updatesToMerge = new ConcurrentHashMap<K, V>(); snapshot = new ConcurrentSkipListMap<K, V>(); } /** * Create a snapshot of current map. * * @return a snapshot of current map.
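*
* A minimal usage sketch (the key/value types are chosen for illustration):
* <pre>{@code
* SnapshotMap<Long, Boolean> ledgers = new SnapshotMap<Long, Boolean>();
* ledgers.put(1L, Boolean.TRUE);
* NavigableMap<Long, Boolean> view = ledgers.snapshot();
* }</pre>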
*/ public NavigableMap<K, V> snapshot() { this.lock.writeLock().lock(); try { if (updates.isEmpty()) { return snapshot; } // put updates for merge to snapshot updatesToMerge = updates; updates = new ConcurrentHashMap<K, V>(); } finally { this.lock.writeLock().unlock(); } // merging the updates to snapshot for (Map.Entry<K, V> entry : updatesToMerge.entrySet()) { snapshot.put(entry.getKey(), entry.getValue()); } // clear updatesToMerge this.lock.writeLock().lock(); try { updatesToMerge = new ConcurrentHashMap<K, V>(); } finally { this.lock.writeLock().unlock(); } return snapshot; } /** * Associates the specified value with the specified key in this map. * * @param key * Key with which the specified value is to be associated. * @param value * Value to be associated with the specified key. */ public void put(K key, V value) { this.lock.readLock().lock(); try { updates.put(key, value); } finally { this.lock.readLock().unlock(); } } /** * Removes the mapping for the key from this map if it is present. * * @param key * Key whose mapping is to be removed from this map. */ public void remove(K key) { this.lock.readLock().lock(); try { // first remove updates updates.remove(key); updatesToMerge.remove(key); // then remove snapshot snapshot.remove(key); } finally { this.lock.readLock().unlock(); } } /** * Returns true if this map contains a mapping for the specified key. * * @param key * Key whose presence is in the map to be tested. * @return true if the map contains a mapping for the specified key. */ public boolean containsKey(K key) { this.lock.readLock().lock(); try { return updates.containsKey(key) || updatesToMerge.containsKey(key) || snapshot.containsKey(key); } finally { this.lock.readLock().unlock(); } } } StringEntryFormatter.java000066400000000000000000000027611244507361200354400ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/util/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License.
* */ package org.apache.bookkeeper.util; import java.io.IOException; import org.apache.commons.configuration.Configuration; import com.google.protobuf.ByteString; public class StringEntryFormatter extends EntryFormatter { @Override public void formatEntry(byte[] data) { System.out.println(ByteString.copyFrom(data).toStringUtf8()); } @Override public void formatEntry(java.io.InputStream input) { try { byte[] data = new byte[input.available()]; input.read(data, 0, data.length); formatEntry(data); } catch (IOException ie) { System.out.println("Warn: Unreadable entry : " + ie.getMessage()); } } } bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/util/StringUtils.java000066400000000000000000000077101244507361200336310ustar00rootroot00000000000000package org.apache.bookkeeper.util; /** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import java.io.IOException; import java.net.InetSocketAddress; /** * Provides utilities for parsing network addresses, ledger ids from znode * paths, etc. * */ public class StringUtils { // Ledger Node Prefix public static final String LEDGER_NODE_PREFIX = "L"; /** * Parses an address of the form host:port into an InetSocketAddress.
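* For example, "localhost:3181" yields an InetSocketAddress for host
* "localhost" and port 3181.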
* * @param s * the address string, in host:port form * @throws IOException * if the string is not of the form host:port */ public static InetSocketAddress parseAddr(String s) throws IOException { String[] parts = s.split(":"); if (parts.length != 2) { throw new IOException(s + " does not have the form host:port"); } int port; try { port = Integer.parseInt(parts[1]); } catch (NumberFormatException e) { throw new IOException(s + " does not have the form host:port"); } InetSocketAddress addr = new InetSocketAddress(parts[0], port); return addr; } public static String addrToString(InetSocketAddress addr) { return addr.getAddress().getHostAddress() + ":" + addr.getPort(); } /** * Formats ledger ID according to ZooKeeper rules * * @param id * znode id */ public static String getZKStringId(long id) { return String.format("%010d", id); } /** * Get the hierarchical ledger path according to the ledger id * * @param ledgerId * ledger id * @return the hierarchical path */ public static String getHierarchicalLedgerPath(long ledgerId) { String ledgerIdStr = getZKStringId(ledgerId); // do 2-4-4 split StringBuilder sb = new StringBuilder(); sb.append("/") .append(ledgerIdStr.substring(0, 2)).append("/") .append(ledgerIdStr.substring(2, 6)).append("/") .append(LEDGER_NODE_PREFIX) .append(ledgerIdStr.substring(6, 10)); return sb.toString(); } /** * Parse the hierarchical ledger path to its ledger id * * @param hierarchicalLedgerPath * @return the ledger id * @throws IOException */ public static long stringToHierarchicalLedgerId(String hierarchicalLedgerPath) throws IOException { String[] hierarchicalParts = hierarchicalLedgerPath.split("/"); if (hierarchicalParts.length != 3) { throw new IOException("it is not a valid hierarchical path name : " + hierarchicalLedgerPath); } hierarchicalParts[2] = hierarchicalParts[2].substring(LEDGER_NODE_PREFIX.length()); return stringToHierarchicalLedgerId(hierarchicalParts); } /** * Get ledger id * * @param levelNodes * level of the ledger path * @return ledger id * @throws IOException */ public static long stringToHierarchicalLedgerId(String... levelNodes) throws IOException { try { StringBuilder sb = new StringBuilder(); for (String node : levelNodes) { sb.append(node); } return Long.parseLong(sb.toString()); } catch (NumberFormatException e) { throw new IOException(e); } } } bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/util/Tool.java000066400000000000000000000026271244507361200322570ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.bookkeeper.util; import org.apache.commons.configuration.Configuration; /** * A tool interface that supports handling of generic command-line options. */ public interface Tool { /** * Executes the command with given arguments. * * @param args command specific arguments * @return exit code.
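*
* A minimal implementation sketch (the class name and behaviour are
* hypothetical, purely for illustration):
* <pre>{@code
* public class EchoTool implements Tool {
*     private Configuration conf;
*     public int run(String[] args) throws Exception {
*         for (String arg : args) {
*             System.out.println(arg); // echo each argument back
*         }
*         return 0;
*     }
*     public void setConf(Configuration conf) throws Exception {
*         this.conf = conf;
*     }
* }
* }</pre>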
* @throws Exception */ public int run(String[] args) throws Exception; /** * Passes a configuration object to the tool. * * @param conf configuration object * @throws Exception */ public void setConf(Configuration conf) throws Exception; } bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/util/ZkUtils.java000066400000000000000000000232731244507361200327510ustar00rootroot00000000000000/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.util; import java.io.File; import java.io.IOException; import java.util.List; import java.util.concurrent.CountDownLatch; import java.util.concurrent.atomic.AtomicInteger; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.GenericCallback; import org.apache.bookkeeper.zookeeper.ZooKeeperWatcherBase; import org.apache.zookeeper.CreateMode; import org.apache.zookeeper.AsyncCallback; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.AsyncCallback.StringCallback; import org.apache.zookeeper.KeeperException.Code; import org.apache.zookeeper.data.ACL; import org.apache.zookeeper.ZooKeeper; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Provides utilities for ZooKeeper access, etc. */ public class ZkUtils { private static final Logger LOG = LoggerFactory.getLogger(ZkUtils.class); /** * Asynchronously create a zookeeper path recursively and optimistically.
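* If creating originalPath fails with NONODE, the missing parents are first
* created (recursively, as persistent znodes, since ephemeral nodes cannot
* have children) and the create of originalPath is then retried.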
* * @see #createFullPathOptimistic(ZooKeeper,String,byte[],List,CreateMode) * * @param zk * Zookeeper client * @param originalPath * Zookeeper full path * @param data * Zookeeper data * @param acl * Acl of the zk path * @param createMode * Create mode of zk path * @param callback * Callback * @param ctx * Context object */ public static void asyncCreateFullPathOptimistic( final ZooKeeper zk, final String originalPath, final byte[] data, final List<ACL> acl, final CreateMode createMode, final AsyncCallback.StringCallback callback, final Object ctx) { zk.create(originalPath, data, acl, createMode, new StringCallback() { @Override public void processResult(int rc, String path, Object ctx, String name) { if (rc != Code.NONODE.intValue()) { callback.processResult(rc, path, ctx, name); return; } // Since I got a nonode, it means that my parents don't exist // create mode is persistent since ephemeral nodes can't be // parents String parent = new File(originalPath).getParent().replace("\\", "/"); asyncCreateFullPathOptimistic(zk, parent, new byte[0], acl, CreateMode.PERSISTENT, new StringCallback() { @Override public void processResult(int rc, String path, Object ctx, String name) { if (rc == Code.OK.intValue() || rc == Code.NODEEXISTS.intValue()) { // succeeded in creating the parent, now // create the original path asyncCreateFullPathOptimistic(zk, originalPath, data, acl, createMode, callback, ctx); } else { callback.processResult(rc, path, ctx, name); } } }, ctx); } }, ctx); } /** * Create zookeeper path recursively and optimistically. This method can throw * any of the KeeperExceptions which can be thrown by ZooKeeper#create. * KeeperException.NodeExistsException will only be thrown if the full path specified * by _path_ already exists. The existence of any parent znodes is not an error * condition.
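*
* A minimal usage sketch (the path is hypothetical; ZooDefs and CreateMode
* come from the ZooKeeper API):
* <pre>{@code
* ZkUtils.createFullPathOptimistic(zk, "/ledgers/available/myhost:3181",
*         new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
* }</pre>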
* * @param zkc * - ZK instance * @param path * - znode path * @param data * - znode data * @param acl * - Acl of the zk path * @param createMode * - Create mode of zk path * @throws KeeperException * if the server returns a non-zero error code, or invalid ACL * @throws InterruptedException * if the transaction is interrupted */ public static void createFullPathOptimistic(ZooKeeper zkc, String path, byte[] data, final List<ACL> acl, final CreateMode createMode) throws KeeperException, InterruptedException { final CountDownLatch latch = new CountDownLatch(1); final AtomicInteger rc = new AtomicInteger(Code.OK.intValue()); asyncCreateFullPathOptimistic(zkc, path, data, acl, createMode, new StringCallback() { @Override public void processResult(int rc2, String path, Object ctx, String name) { rc.set(rc2); latch.countDown(); } }, null); latch.await(); if (rc.get() != Code.OK.intValue()) { throw KeeperException.create(Code.get(rc.get())); } } private static class GetChildrenCtx { int rc; boolean done = false; List<String> children = null; } /** * Sync get all children under single zk node * * @param zk * zookeeper client * @param node * node path * @return direct children * @throws InterruptedException * @throws IOException */ public static List<String> getChildrenInSingleNode(final ZooKeeper zk, final String node) throws InterruptedException, IOException { final GetChildrenCtx ctx = new GetChildrenCtx(); getChildrenInSingleNode(zk, node, new GenericCallback<List<String>>() { @Override public void operationComplete(int rc, List<String> ledgers) { synchronized (ctx) { if (Code.OK.intValue() == rc) { ctx.children = ledgers; } ctx.rc = rc; ctx.done = true; ctx.notifyAll(); } } }); synchronized (ctx) { while (!ctx.done) { ctx.wait(); } } if (Code.OK.intValue() != ctx.rc) { throw new IOException("Error on getting children from node " + node); } return ctx.children; } /** * Async get direct children under single node * * @param zk * zookeeper client * @param node * node path * @param cb * callback function */ public static void getChildrenInSingleNode(final ZooKeeper zk, final String node, final GenericCallback<List<String>> cb) { zk.sync(node, new AsyncCallback.VoidCallback() { @Override public void processResult(int rc, String path, Object ctx) { if (rc != Code.OK.intValue()) { LOG.error("ZK error syncing nodes when getting children: ", KeeperException .create(KeeperException.Code.get(rc), path)); cb.operationComplete(rc, null); return; } zk.getChildren(node, false, new AsyncCallback.ChildrenCallback() { @Override public void processResult(int rc, String path, Object ctx, List<String> nodes) { if (rc != Code.OK.intValue()) { LOG.error("Error polling ZK for the available nodes: ", KeeperException .create(KeeperException.Code.get(rc), path)); cb.operationComplete(rc, null); return; } cb.operationComplete(rc, nodes); } }, null); } }, null); } /** * Get a new ZooKeeper client. Waits till the connection is complete. If * the connection is not successful within the timeout, then throws back an exception. * * @param servers * ZK servers connection string. * @param w * Watcher which carries the session timeout and receives connection events.
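*
* A minimal usage sketch (connect string and timeout are hypothetical):
* <pre>{@code
* ZooKeeperWatcherBase watcher = new ZooKeeperWatcherBase(10000);
* ZooKeeper zk = ZkUtils.createConnectedZookeeperClient("localhost:2181", watcher);
* }</pre>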
*/ public static ZooKeeper createConnectedZookeeperClient(String servers, ZooKeeperWatcherBase w) throws IOException, InterruptedException, KeeperException { if (servers == null || servers.isEmpty()) { throw new IllegalArgumentException("servers cannot be empty"); } final ZooKeeper newZk = new ZooKeeper(servers, w.getZkSessionTimeOut(), w); w.waitForConnection(); // Re-checking zookeeper connection status if (!newZk.getState().isConnected()) { throw KeeperException.create(KeeperException.Code.CONNECTIONLOSS); } return newZk; } } bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/versioning/000077500000000000000000000000001244507361200317005ustar00rootroot00000000000000Version.java000066400000000000000000000035461244507361200341210ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/versioning/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.versioning; /** * An interface that allows us to determine if a given version happened before or after another version. */ public interface Version { /** * Initial version. */ public static final Version NEW = new Version() { @Override public Occurred compare(Version v) { if (null == v) { throw new NullPointerException("Version is not allowed to be null."); } if (this == v) { return Occurred.CONCURRENTLY; } return Occurred.BEFORE; } }; /** * Match any version. */ public static final Version ANY = new Version() { @Override public Occurred compare(Version v) { if (null == v) { throw new NullPointerException("Version is not allowed to be null."); } return Occurred.CONCURRENTLY; } }; public static enum Occurred { BEFORE, AFTER, CONCURRENTLY } public Occurred compare(Version v); } Versioned.java000066400000000000000000000024331244507361200344240ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/versioning/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package org.apache.bookkeeper.versioning; public class Versioned<T> { T value; Version version; public Versioned(T value, Version version) { this.value = value; this.version = version; } public void setValue(T value) { this.value = value; } public T getValue() { return value; } public void setVersion(Version version) { this.version = version; } public Version getVersion() { return version; } } bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/zookeeper/000077500000000000000000000000001244507361200315205ustar00rootroot00000000000000ZooKeeperWatcherBase.java000066400000000000000000000062161244507361200363250ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/java/org/apache/bookkeeper/zookeeper/** * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.zookeeper; import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.WatchedEvent; import org.apache.zookeeper.Watcher; import org.apache.zookeeper.Watcher.Event.EventType; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Watcher for receiving zookeeper server connection events.
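* The first SyncConnected event counts down an internal latch that
* {@link #waitForConnection()} blocks on for up to the session timeout.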
*/ public class ZooKeeperWatcherBase implements Watcher { private static final Logger LOG = LoggerFactory .getLogger(ZooKeeperWatcherBase.class); private final int zkSessionTimeOut; private CountDownLatch clientConnectLatch = new CountDownLatch(1); public ZooKeeperWatcherBase(int zkSessionTimeOut) { this.zkSessionTimeOut = zkSessionTimeOut; } @Override public void process(WatchedEvent event) { // If event type is NONE, this is a connection status change if (event.getType() != EventType.None) { LOG.debug("Received event: {}, path: {} from ZooKeeper server", event.getType(), event.getPath()); return; } LOG.debug("Received {} from ZooKeeper server", event.getState()); // TODO: Needs to handle AuthFailed, SaslAuthenticated events switch (event.getState()) { case SyncConnected: clientConnectLatch.countDown(); break; case Disconnected: LOG.debug("Ignoring Disconnected event from ZooKeeper server"); break; case Expired: LOG.error("ZooKeeper client connection to the " + "ZooKeeper server has expired!"); break; } } /** * Waits for the SyncConnected event from the ZooKeeper server. * * @throws KeeperException * when there is no connection * @throws InterruptedException * interrupted while waiting for connection */ public void waitForConnection() throws KeeperException, InterruptedException { if (!clientConnectLatch.await(zkSessionTimeOut, TimeUnit.MILLISECONDS)) { throw KeeperException.create(KeeperException.Code.CONNECTIONLOSS); } } /** * Returns the ZooKeeper session timeout. */ public int getZkSessionTimeOut() { return zkSessionTimeOut; } } bookkeeper-release-4.2.4/bookkeeper-server/src/main/proto/000077500000000000000000000000001244507361200236015ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/proto/DataFormats.proto000066400000000000000000000042211244507361200270720ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License.
*/ option java_package = "org.apache.bookkeeper.proto"; option optimize_for = SPEED; /** * Metadata format for storing ledger information */ message LedgerMetadataFormat { required int32 quorumSize = 1; required int32 ensembleSize = 2; required int64 length = 3; optional int64 lastEntryId = 4; enum State { OPEN = 1; IN_RECOVERY = 2; CLOSED = 3; } required State state = 5 [default = OPEN]; message Segment { repeated string ensembleMember = 1; required int64 firstEntryId = 2; } repeated Segment segment = 6; enum DigestType { CRC32 = 1; HMAC = 2; } optional DigestType digestType = 7; optional bytes password = 8; optional int32 ackQuorumSize = 9; } message LedgerRereplicationLayoutFormat { required string type = 1; required int32 version = 2; } message UnderreplicatedLedgerFormat { repeated string replica = 1; } /** * Cookie format for storing cookie information */ message CookieFormat { required string bookieHost = 1; required string journalDir = 2; required string ledgerDirs = 3; optional string instanceId = 4; } /** * Debug information for locks */ message LockDataFormat { optional string bookieId = 1; } /** * Debug information for auditor votes */ message AuditorVoteFormat { optional string bookieId = 1; }bookkeeper-release-4.2.4/bookkeeper-server/src/main/resources/000077500000000000000000000000001244507361200244505ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/main/resources/LICENSE.bin.txt000066400000000000000000000372731244507361200270560ustar00rootroot00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. 
For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ------------------------------------------------------------------------------------ For lib/slf4j-*.jar Copyright (c) 2004-2011 QOS.ch All rights reserved. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ------------------------------------------------------------------------------------ For lib/protobuf-java-*.jar Copyright 2008, Google Inc. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of Google Inc. nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. Code generated by the Protocol Buffer compiler is owned by the owner of the input file used when generating it. This code is not standalone and requires a support library to be linked with it. This support library is itself covered by the above license. ------------------------------------------------------------------------------------ For lib/jline-*.jar Copyright (c) 2002-2006, Marc Prud'hommeaux All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. Neither the name of JLine nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. bookkeeper-release-4.2.4/bookkeeper-server/src/main/resources/NOTICE.bin.txt000066400000000000000000000031261244507361200267430ustar00rootroot00000000000000Apache BookKeeper Copyright 2011-2014 The Apache Software Foundation Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at: http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. This project includes: Apache Log4j under The Apache Software License, Version 2.0 Commons BeanUtils Core under The Apache Software License, Version 2.0 Commons CLI under The Apache Software License, Version 2.0 Commons Codec under The Apache Software License, Version 2.0 Commons Collections under The Apache Software License, Version 2.0 Commons Configuration under The Apache Software License, Version 2.0 Commons IO under The Apache Software License, Version 2.0 Commons Lang under The Apache Software License, Version 2.0 Commons Logging under The Apache Software License, Version 2.0 commons Beanutils under Apache License, Version 2.0 Commons Digester under The Apache Software License, Version 2.0 JLine under BSD SLF4J API Module under MIT License SLF4J LOG4J-12 Binding under MIT License The Netty Project under Apache License, Version 2.0 ZooKeeper under Apache License, Version 2.0 Protocol Buffer Java API under New BSD license Guava under The Apache Software License, Version 2.0 bookkeeper-release-4.2.4/bookkeeper-server/src/main/resources/findbugsExclude.xml000066400000000000000000000017671244507361200303200ustar00rootroot00000000000000 bookkeeper-release-4.2.4/bookkeeper-server/src/test/000077500000000000000000000000001244507361200224715ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/000077500000000000000000000000001244507361200234125ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/000077500000000000000000000000001244507361200242015ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/000077500000000000000000000000001244507361200254225ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/000077500000000000000000000000001244507361200275505ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/bookie/000077500000000000000000000000001244507361200310205ustar00rootroot00000000000000BookieAccessor.java000066400000000000000000000021761244507361200345050ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/bookie/** * * Licensed to the Apache Software Foundation (ASF) under one * or 
more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.bookie; import java.io.IOException; /** * Accessor class to avoid making Bookie internals public */ public class BookieAccessor { /** * Force a bookie to flush its ledger storage */ public static void forceFlush(Bookie b) throws IOException { b.ledgerStorage.flush(); } }BookieInitializationTest.java000066400000000000000000000322251244507361200365700ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/bookie/** * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
* */ package org.apache.bookkeeper.bookie; import static org.junit.Assert.fail; import java.io.File; import java.io.IOException; import java.net.BindException; import java.net.InetAddress; import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; import junit.framework.Assert; import org.apache.bookkeeper.bookie.Bookie; import org.apache.bookkeeper.bookie.BookieException; import org.apache.bookkeeper.conf.ClientConfiguration; import org.apache.bookkeeper.conf.ServerConfiguration; import org.apache.bookkeeper.client.BookKeeperAdmin; import org.apache.bookkeeper.proto.BookieServer; import org.apache.bookkeeper.test.ZooKeeperUtil; import org.apache.commons.io.FileUtils; import org.apache.zookeeper.WatchedEvent; import org.apache.zookeeper.Watcher; import org.apache.zookeeper.ZooKeeper; import org.apache.zookeeper.data.Stat; import org.apache.zookeeper.KeeperException; import org.junit.After; import org.junit.Before; import org.junit.Test; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Testing bookie initialization cases */ public class BookieInitializationTest { private static final Logger LOG = LoggerFactory .getLogger(BookieInitializationTest.class); ZooKeeperUtil zkutil; ZooKeeper zkc = null; ZooKeeper newzk = null; @Before public void setupZooKeeper() throws Exception { zkutil = new ZooKeeperUtil(); zkutil.startServer(); zkc = zkutil.getZooKeeperClient(); } @After public void tearDownZooKeeper() throws Exception { if (newzk != null) { newzk.close(); } zkutil.killServer(); } private static class MockBookie extends Bookie { MockBookie(ServerConfiguration conf) throws IOException, KeeperException, InterruptedException, BookieException { super(conf); } void testRegisterBookie(ServerConfiguration conf) throws IOException { super.registerBookie(conf); } } /** * Verify the bookie server exit code. On ZooKeeper exception, should return * exit code ZK_REG_FAIL = 4 */ @Test(timeout = 20000) public void testExitCodeZK_REG_FAIL() throws Exception { File tmpDir = File.createTempFile("bookie", "test"); tmpDir.delete(); tmpDir.mkdir(); final ServerConfiguration conf = new ServerConfiguration() .setZkServers(null).setJournalDirName(tmpDir.getPath()) .setAllowLoopback(true).setLedgerDirNames(new String[] { tmpDir.getPath() }); // simulating ZooKeeper exception by assigning a closed zk client to bk BookieServer bkServer = new BookieServer(conf) { protected Bookie newBookie(ServerConfiguration conf) throws IOException, KeeperException, InterruptedException, BookieException { MockBookie bookie = new MockBookie(conf); bookie.zk = zkc; zkc.close(); return bookie; }; }; bkServer.start(); bkServer.join(); Assert.assertEquals("Failed to return ExitCode.ZK_REG_FAIL", ExitCode.ZK_REG_FAIL, bkServer.getExitCode()); } /** * Verify the bookie reg. Restarting bookie server will wait for the session * timeout when previous reg node exists in zk. 
On zNode delete event, * should continue startup */ @Test(timeout = 20000) public void testBookieRegistration() throws Exception { File tmpDir = File.createTempFile("bookie", "test"); tmpDir.delete(); tmpDir.mkdir(); final ServerConfiguration conf = new ServerConfiguration() .setZkServers(null).setJournalDirName(tmpDir.getPath()) .setAllowLoopback(true).setLedgerDirNames(new String[] { tmpDir.getPath() }); final String bkRegPath = conf.getZkAvailableBookiesPath() + "/" + InetAddress.getLocalHost().getHostAddress() + ":" + conf.getBookiePort(); MockBookie b = new MockBookie(conf); b.zk = zkc; b.testRegisterBookie(conf); Stat bkRegNode1 = zkc.exists(bkRegPath, false); Assert.assertNotNull("Bookie registration node doesn't exists!", bkRegNode1); // simulating bookie restart, on restart bookie will create new // zkclient and doing the registration. createNewZKClient(); b.zk = newzk; // deleting the znode, so that the bookie registration should // continue successfully on NodeDeleted event new Thread() { @Override public void run() { try { Thread.sleep(conf.getZkTimeout() / 3); zkc.delete(bkRegPath, -1); } catch (Exception e) { // Not handling, since the testRegisterBookie will fail LOG.error("Failed to delete the znode :" + bkRegPath, e); } } }.start(); try { b.testRegisterBookie(conf); } catch (IOException e) { Throwable t = e.getCause(); if (t instanceof KeeperException) { KeeperException ke = (KeeperException) t; Assert.assertTrue("ErrorCode:" + ke.code() + ", Registration node exists", ke.code() != KeeperException.Code.NODEEXISTS); } throw e; } // verify ephemeral owner of the bkReg znode Stat bkRegNode2 = newzk.exists(bkRegPath, false); Assert.assertNotNull("Bookie registration has been failed", bkRegNode2); Assert.assertTrue("Bookie is referring to old registration znode:" + bkRegNode1 + ", New ZNode:" + bkRegNode2, bkRegNode1 .getEphemeralOwner() != bkRegNode2.getEphemeralOwner()); } /** * Verify the bookie registration, it should throw * KeeperException.NodeExistsException if the znode still exists even after * the zk session timeout. */ @Test(timeout = 30000) public void testRegNodeExistsAfterSessionTimeOut() throws Exception { File tmpDir = File.createTempFile("bookie", "test"); tmpDir.delete(); tmpDir.mkdir(); ServerConfiguration conf = new ServerConfiguration().setZkServers(null) .setAllowLoopback(true) .setJournalDirName(tmpDir.getPath()) .setLedgerDirNames(new String[] { tmpDir.getPath() }); String bkRegPath = conf.getZkAvailableBookiesPath() + "/" + InetAddress.getLocalHost().getHostAddress() + ":" + conf.getBookiePort(); MockBookie b = new MockBookie(conf); b.zk = zkc; b.testRegisterBookie(conf); Stat bkRegNode1 = zkc.exists(bkRegPath, false); Assert.assertNotNull("Bookie registration node doesn't exists!", bkRegNode1); // simulating bookie restart, on restart bookie will create new // zkclient and doing the registration. createNewZKClient(); b.zk = newzk; try { b.testRegisterBookie(conf); fail("Should throw NodeExistsException as the znode is not getting expired"); } catch (IOException e) { Throwable t = e.getCause(); if (t instanceof KeeperException) { KeeperException ke = (KeeperException) t; Assert.assertTrue("ErrorCode:" + ke.code() + ", Registration node doesn't exists", ke.code() == KeeperException.Code.NODEEXISTS); // verify ephemeral owner of the bkReg znode Stat bkRegNode2 = newzk.exists(bkRegPath, false); Assert.assertNotNull("Bookie registration has been failed", bkRegNode2); Assert.assertTrue( "Bookie wrongly registered. 
Old registration znode:" + bkRegNode1 + ", New znode:" + bkRegNode2, bkRegNode1.getEphemeralOwner() == bkRegNode2 .getEphemeralOwner()); return; } throw e; } } /** * Verify duplicate bookie server startup. Should throw * java.net.BindException if already BK server is running */ @Test(timeout = 20000) public void testDuplicateBookieServerStartup() throws Exception { File tmpDir = File.createTempFile("bookie", "test"); tmpDir.delete(); tmpDir.mkdir(); ServerConfiguration conf = new ServerConfiguration(); int port = 12555; conf.setZkServers(null).setBookiePort(port) .setAllowLoopback(true) .setJournalDirName(tmpDir.getPath()) .setLedgerDirNames(new String[] { tmpDir.getPath() }); BookieServer bs1 = new BookieServer(conf); bs1.start(); // starting bk server with same conf try { BookieServer bs2 = new BookieServer(conf); bs2.start(); fail("Should throw BindException, as the bk server is already running!"); } catch (BindException be) { Assert.assertTrue("BKServer allowed duplicate startups!", be .getMessage().contains("Address already in use")); } } /** * Verify bookie start behaviour when ZK Server is not running. */ @Test(timeout = 20000) public void testStartBookieWithoutZKServer() throws Exception { zkutil.killServer(); File tmpDir = File.createTempFile("bookie", "test"); tmpDir.delete(); tmpDir.mkdir(); final ServerConfiguration conf = new ServerConfiguration() .setZkServers(zkutil.getZooKeeperConnectString()) .setZkTimeout(5000).setJournalDirName(tmpDir.getPath()) .setAllowLoopback(true).setLedgerDirNames(new String[] { tmpDir.getPath() }); try { new Bookie(conf); fail("Should throw ConnectionLossException as ZKServer is not running!"); } catch (KeeperException.ConnectionLossException e) { // expected behaviour } finally { FileUtils.deleteDirectory(tmpDir); } } /** * Verify that if I try to start a bookie without zk initialized, it won't * prevent me from starting the bookie when zk is initialized */ @Test(timeout = 20000) public void testStartBookieWithoutZKInitialized() throws Exception { File tmpDir = File.createTempFile("bookie", "test"); tmpDir.delete(); tmpDir.mkdir(); final String ZK_ROOT = "/ledgers2"; final ServerConfiguration conf = new ServerConfiguration() .setZkServers(zkutil.getZooKeeperConnectString()) .setZkTimeout(5000).setJournalDirName(tmpDir.getPath()) .setAllowLoopback(true) .setLedgerDirNames(new String[] { tmpDir.getPath() }); conf.setZkLedgersRootPath(ZK_ROOT); try { try { new Bookie(conf); fail("Should throw NoNodeException"); } catch (Exception e) { // shouldn't be able to start } ClientConfiguration clientConf = new ClientConfiguration(); clientConf.setZkServers(zkutil.getZooKeeperConnectString()); clientConf.setZkLedgersRootPath(ZK_ROOT); BookKeeperAdmin.format(clientConf, false, false); Bookie b = new Bookie(conf); b.shutdown(); } finally { FileUtils.deleteDirectory(tmpDir); } } private void createNewZKClient() throws Exception { // create a zookeeper client LOG.debug("Instantiate ZK Client"); final CountDownLatch latch = new CountDownLatch(1); newzk = new ZooKeeper(zkutil.getZooKeeperConnectString(), 10000, new Watcher() { @Override public void process(WatchedEvent event) { // handle session disconnects and expires if (event.getState().equals( Watcher.Event.KeeperState.SyncConnected)) { latch.countDown(); } } }); if (!latch.await(10000, TimeUnit.MILLISECONDS)) { newzk.close(); fail("Could not connect to zookeeper server"); } } } 
BookieJournalTest.java000066400000000000000000000517451244507361200352230ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/bookiepackage org.apache.bookkeeper.bookie; /* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ import java.io.File; import java.io.RandomAccessFile; import java.io.IOException; import java.nio.ByteBuffer; import java.nio.channels.FileChannel; import java.util.ArrayList; import java.util.Enumeration; import java.util.Random; import java.util.Set; import java.util.Arrays; import org.apache.bookkeeper.conf.ServerConfiguration; import org.apache.bookkeeper.client.AsyncCallback.AddCallback; import org.apache.bookkeeper.client.BKException; import org.apache.bookkeeper.client.BookKeeperTestClient; import org.apache.bookkeeper.client.LedgerEntry; import org.apache.bookkeeper.client.ClientUtil; import org.apache.bookkeeper.client.LedgerHandle; import org.apache.bookkeeper.client.AsyncCallback.ReadCallback; import org.apache.bookkeeper.client.BookKeeper.DigestType; import org.apache.bookkeeper.proto.BookieServer; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.WatchedEvent; import org.apache.zookeeper.Watcher; import org.junit.Before; import org.junit.Test; import static org.junit.Assert.*; public class BookieJournalTest { static Logger LOG = LoggerFactory.getLogger(BookieJournalTest.class); final Random r = new Random(System.currentTimeMillis()); private void writeIndexFileForLedger(File indexDir, long ledgerId, byte[] masterKey) throws Exception { File fn = new File(indexDir, LedgerCacheImpl.getLedgerName(ledgerId)); fn.getParentFile().mkdirs(); FileInfo fi = new FileInfo(fn, masterKey); // force creation of index file fi.write(new ByteBuffer[]{ ByteBuffer.allocate(0) }, 0); fi.close(true); } private void writePartialIndexFileForLedger(File indexDir, long ledgerId, byte[] masterKey, boolean truncateToMasterKey) throws Exception { File fn = new File(indexDir, LedgerCacheImpl.getLedgerName(ledgerId)); fn.getParentFile().mkdirs(); FileInfo fi = new FileInfo(fn, masterKey); // force creation of index file fi.write(new ByteBuffer[]{ ByteBuffer.allocate(0) }, 0); fi.close(true); // file info header int headerLen = 8 + 4 + masterKey.length; // truncate the index file int leftSize; if (truncateToMasterKey) { leftSize = r.nextInt(headerLen); } else { leftSize = headerLen + r.nextInt(1024 - headerLen); } FileChannel fc = new RandomAccessFile(fn, "rw").getChannel(); fc.truncate(leftSize); fc.close(); } /** * Generate meta entry with given master key */ private ByteBuffer generateMetaEntry(long ledgerId, byte[] masterKey) { ByteBuffer bb = ByteBuffer.allocate(8 + 8 + 4 + masterKey.length); bb.putLong(ledgerId); 
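// METAENTRY_ID_LEDGER_KEY is a reserved negative entry id; on journal replay it lets the bookie restore the ledger's master key before any real entries are processed.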
bb.putLong(Bookie.METAENTRY_ID_LEDGER_KEY); bb.putInt(masterKey.length); bb.put(masterKey); bb.flip(); return bb; } private void writeJunkJournal(File journalDir) throws Exception { long logId = System.currentTimeMillis(); File fn = new File(journalDir, Long.toHexString(logId) + ".txn"); FileChannel fc = new RandomAccessFile(fn, "rw").getChannel(); ByteBuffer zeros = ByteBuffer.allocate(512); fc.write(zeros, 4*1024*1024); fc.position(0); for (int i = 1; i <= 10; i++) { fc.write(ByteBuffer.wrap("JunkJunkJunk".getBytes())); } } private void writePreV2Journal(File journalDir, int numEntries) throws Exception { long logId = System.currentTimeMillis(); File fn = new File(journalDir, Long.toHexString(logId) + ".txn"); FileChannel fc = new RandomAccessFile(fn, "rw").getChannel(); ByteBuffer zeros = ByteBuffer.allocate(512); fc.write(zeros, 4*1024*1024); fc.position(0); byte[] data = "JournalTestData".getBytes(); long lastConfirmed = LedgerHandle.INVALID_ENTRY_ID; for (int i = 1; i <= numEntries; i++) { ByteBuffer packet = ClientUtil.generatePacket(1, i, lastConfirmed, i*data.length, data).toByteBuffer(); lastConfirmed = i; ByteBuffer lenBuff = ByteBuffer.allocate(4); lenBuff.putInt(packet.remaining()); lenBuff.flip(); fc.write(lenBuff); fc.write(packet); } } private JournalChannel writePostV2Journal(File journalDir, int numEntries) throws Exception { long logId = System.currentTimeMillis(); JournalChannel jc = new JournalChannel(journalDir, logId); BufferedChannel bc = jc.getBufferedChannel(); byte[] data = new byte[1024]; Arrays.fill(data, (byte)'X'); long lastConfirmed = LedgerHandle.INVALID_ENTRY_ID; for (int i = 1; i <= numEntries; i++) { ByteBuffer packet = ClientUtil.generatePacket(1, i, lastConfirmed, i*data.length, data).toByteBuffer(); lastConfirmed = i; ByteBuffer lenBuff = ByteBuffer.allocate(4); lenBuff.putInt(packet.remaining()); lenBuff.flip(); bc.write(lenBuff); bc.write(packet); } bc.flush(true); return jc; } private JournalChannel writePostV3Journal(File journalDir, int numEntries, byte[] masterKey) throws Exception { long logId = System.currentTimeMillis(); JournalChannel jc = new JournalChannel(journalDir, logId); BufferedChannel bc = jc.getBufferedChannel(); byte[] data = new byte[1024]; Arrays.fill(data, (byte)'X'); long lastConfirmed = LedgerHandle.INVALID_ENTRY_ID; for (int i = 0; i <= numEntries; i++) { ByteBuffer packet; if (i == 0) { packet = generateMetaEntry(1, masterKey); } else { packet = ClientUtil.generatePacket(1, i, lastConfirmed, i*data.length, data).toByteBuffer(); } lastConfirmed = i; ByteBuffer lenBuff = ByteBuffer.allocate(4); lenBuff.putInt(packet.remaining()); lenBuff.flip(); bc.write(lenBuff); bc.write(packet); } bc.flush(true); return jc; } /** * test that we can open a journal written without the magic * word at the start. 
This is for versions of bookkeeper before * the magic word was introduced */ @Test(timeout=60000) public void testPreV2Journal() throws Exception { File journalDir = File.createTempFile("bookie", "journal"); journalDir.delete(); journalDir.mkdir(); Bookie.checkDirectoryStructure(Bookie.getCurrentDirectory(journalDir)); File ledgerDir = File.createTempFile("bookie", "ledger"); ledgerDir.delete(); ledgerDir.mkdir(); Bookie.checkDirectoryStructure(Bookie.getCurrentDirectory(ledgerDir)); writePreV2Journal(Bookie.getCurrentDirectory(journalDir), 100); writeIndexFileForLedger(Bookie.getCurrentDirectory(ledgerDir), 1, "testPasswd".getBytes()); ServerConfiguration conf = new ServerConfiguration() .setAllowLoopback(true) .setZkServers(null) .setJournalDirName(journalDir.getPath()) .setLedgerDirNames(new String[] { ledgerDir.getPath() }); Bookie b = new Bookie(conf); b.readJournal(); b.readEntry(1, 100); try { b.readEntry(1, 101); fail("Shouldn't have found entry 101"); } catch (Bookie.NoEntryException e) { // correct behaviour } b.shutdown(); } /** * Test that if the journal is all junk, we can not * start the bookie. An admin should look to see what has * happened in this case */ @Test(timeout=60000) public void testAllJunkJournal() throws Exception { File journalDir = File.createTempFile("bookie", "journal"); journalDir.delete(); journalDir.mkdir(); Bookie.checkDirectoryStructure(Bookie.getCurrentDirectory(journalDir)); File ledgerDir = File.createTempFile("bookie", "ledger"); ledgerDir.delete(); ledgerDir.mkdir(); Bookie.checkDirectoryStructure(Bookie.getCurrentDirectory(ledgerDir)); writeJunkJournal(Bookie.getCurrentDirectory(journalDir)); ServerConfiguration conf = new ServerConfiguration() .setAllowLoopback(true) .setZkServers(null) .setJournalDirName(journalDir.getPath()) .setLedgerDirNames(new String[] { ledgerDir.getPath() }); Bookie b = null; try { b = new Bookie(conf); fail("Shouldn't have been able to start without admin"); } catch (Throwable t) { // correct behaviour } finally { if (b != null) { b.shutdown(); } } } /** * Test that we can start with an empty journal. * This can happen if the bookie crashes between creating the * journal and writing the magic word. It could also happen before * the magic word existed, if the bookie started but nothing was * ever written. */ @Test(timeout=60000) public void testEmptyJournal() throws Exception { File journalDir = File.createTempFile("bookie", "journal"); journalDir.delete(); journalDir.mkdir(); Bookie.checkDirectoryStructure(Bookie.getCurrentDirectory(journalDir)); File ledgerDir = File.createTempFile("bookie", "ledger"); ledgerDir.delete(); ledgerDir.mkdir(); Bookie.checkDirectoryStructure(Bookie.getCurrentDirectory(ledgerDir)); writePreV2Journal(Bookie.getCurrentDirectory(journalDir), 0); ServerConfiguration conf = new ServerConfiguration() .setAllowLoopback(true) .setZkServers(null) .setJournalDirName(journalDir.getPath()) .setLedgerDirNames(new String[] { ledgerDir.getPath() }); Bookie b = new Bookie(conf); } /** * Test that a journal can load if only the magic word and * version are there.
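* Such a journal has a valid header but no entries, so replay should succeed trivially.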
*/ @Test(timeout=60000) public void testHeaderOnlyJournal() throws Exception { File journalDir = File.createTempFile("bookie", "journal"); journalDir.delete(); journalDir.mkdir(); Bookie.checkDirectoryStructure(Bookie.getCurrentDirectory(journalDir)); File ledgerDir = File.createTempFile("bookie", "ledger"); ledgerDir.delete(); ledgerDir.mkdir(); Bookie.checkDirectoryStructure(Bookie.getCurrentDirectory(ledgerDir)); writePostV2Journal(Bookie.getCurrentDirectory(journalDir), 0); ServerConfiguration conf = new ServerConfiguration() .setAllowLoopback(true) .setZkServers(null) .setJournalDirName(journalDir.getPath()) .setLedgerDirNames(new String[] { ledgerDir.getPath() }); Bookie b = new Bookie(conf); } /** * Test that if a journal has junk at the end, it does not load. * If the journal is corrupt like this, admin intervention is needed */ @Test(timeout=60000) public void testJunkEndedJournal() throws Exception { File journalDir = File.createTempFile("bookie", "journal"); journalDir.delete(); journalDir.mkdir(); Bookie.checkDirectoryStructure(Bookie.getCurrentDirectory(journalDir)); File ledgerDir = File.createTempFile("bookie", "ledger"); ledgerDir.delete(); ledgerDir.mkdir(); Bookie.checkDirectoryStructure(Bookie.getCurrentDirectory(ledgerDir)); JournalChannel jc = writePostV2Journal(Bookie.getCurrentDirectory(journalDir), 0); jc.getBufferedChannel().write(ByteBuffer.wrap("JunkJunkJunk".getBytes())); jc.getBufferedChannel().flush(true); writeIndexFileForLedger(ledgerDir, 1, "testPasswd".getBytes()); ServerConfiguration conf = new ServerConfiguration() .setAllowLoopback(true) .setZkServers(null) .setJournalDirName(journalDir.getPath()) .setLedgerDirNames(new String[] { ledgerDir.getPath() }); Bookie b = null; try { b = new Bookie(conf); fail("Shouldn't have been able to start with junk at the end of the journal"); } catch (Throwable t) { // correct behaviour } } /** * Test that if the bookie crashes while writing the length * of an entry, we can recover. * * This is currently not the case, which is bad, as recovery * should be fine here: the bookie crashed while writing, * so the client has not been notified of success. */ @Test(timeout=60000) public void testTruncatedInLenJournal() throws Exception { File journalDir = File.createTempFile("bookie", "journal"); journalDir.delete(); journalDir.mkdir(); Bookie.checkDirectoryStructure(Bookie.getCurrentDirectory(journalDir)); File ledgerDir = File.createTempFile("bookie", "ledger"); ledgerDir.delete(); ledgerDir.mkdir(); Bookie.checkDirectoryStructure(Bookie.getCurrentDirectory(ledgerDir)); JournalChannel jc = writePostV2Journal( Bookie.getCurrentDirectory(journalDir), 100); ByteBuffer zeros = ByteBuffer.allocate(2048); jc.fc.position(jc.getBufferedChannel().position() - 0x429); jc.fc.write(zeros); jc.fc.force(false); writeIndexFileForLedger(Bookie.getCurrentDirectory(ledgerDir), 1, "testPasswd".getBytes()); ServerConfiguration conf = new ServerConfiguration() .setAllowLoopback(true) .setZkServers(null) .setJournalDirName(journalDir.getPath()) .setLedgerDirNames(new String[] { ledgerDir.getPath() }); Bookie b = new Bookie(conf); b.readJournal(); b.readEntry(1, 99); try { b.readEntry(1, 100); fail("Shouldn't have found entry 100"); } catch (Bookie.NoEntryException e) { // correct behaviour } } /** * Test that if the bookie crashes in the middle of writing * the actual entry it can recover. * In this case the entry will be available, but it will be corrupt. * This is ok, as the client will disregard the entry after looking * at its checksum.
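* The test zeroes the tail of the last entry, then checks that the payload read back no longer matches what was written.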
*/ @Test(timeout=60000) public void testTruncatedInEntryJournal() throws Exception { File journalDir = File.createTempFile("bookie", "journal"); journalDir.delete(); journalDir.mkdir(); Bookie.checkDirectoryStructure(Bookie.getCurrentDirectory(journalDir)); File ledgerDir = File.createTempFile("bookie", "ledger"); ledgerDir.delete(); ledgerDir.mkdir(); Bookie.checkDirectoryStructure(Bookie.getCurrentDirectory(ledgerDir)); JournalChannel jc = writePostV2Journal( Bookie.getCurrentDirectory(journalDir), 100); ByteBuffer zeros = ByteBuffer.allocate(2048); jc.fc.position(jc.getBufferedChannel().position() - 0x300); jc.fc.write(zeros); jc.fc.force(false); writeIndexFileForLedger(Bookie.getCurrentDirectory(ledgerDir), 1, "testPasswd".getBytes()); ServerConfiguration conf = new ServerConfiguration() .setAllowLoopback(true) .setZkServers(null) .setJournalDirName(journalDir.getPath()) .setLedgerDirNames(new String[] { ledgerDir.getPath() }); Bookie b = new Bookie(conf); b.readJournal(); b.readEntry(1, 99); // still able to read last entry, but it's junk ByteBuffer buf = b.readEntry(1, 100); assertEquals("Ledger Id is wrong", buf.getLong(), 1); assertEquals("Entry Id is wrong", buf.getLong(), 100); assertEquals("Last confirmed is wrong", buf.getLong(), 99); assertEquals("Length is wrong", buf.getLong(), 100*1024); buf.getLong(); // skip checksum boolean allX = true; for (int i = 0; i < 1024; i++) { byte x = buf.get(); allX = allX && x == (byte)'X'; } assertFalse("Some of buffer should have been zeroed", allX); try { b.readEntry(1, 101); fail("Shouldn't have found entry 101"); } catch (Bookie.NoEntryException e) { // correct behaviour } } /** * Test partial index (truncate master key) with pre-v3 journals */ @Test(timeout=60000) public void testPartialFileInfoPreV3Journal1() throws Exception { testPartialFileInfoPreV3Journal(true); } /** * Test partial index with pre-v3 journals */ @Test(timeout=60000) public void testPartialFileInfoPreV3Journal2() throws Exception { testPartialFileInfoPreV3Journal(false); } /** * Test partial index file with pre-v3 journals. 
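* The index file is truncated either inside the master key header (startup should fail) or just past it (journal replay should recover), simulating a crash while the index was being written.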
*/ private void testPartialFileInfoPreV3Journal(boolean truncateMasterKey) throws Exception { File journalDir = File.createTempFile("bookie", "journal"); journalDir.delete(); journalDir.mkdir(); Bookie.checkDirectoryStructure(Bookie.getCurrentDirectory(journalDir)); File ledgerDir = File.createTempFile("bookie", "ledger"); ledgerDir.delete(); ledgerDir.mkdir(); Bookie.checkDirectoryStructure(Bookie.getCurrentDirectory(ledgerDir)); writePreV2Journal(Bookie.getCurrentDirectory(journalDir), 100); writePartialIndexFileForLedger(Bookie.getCurrentDirectory(ledgerDir), 1, "testPasswd".getBytes(), truncateMasterKey); ServerConfiguration conf = new ServerConfiguration() .setAllowLoopback(true) .setZkServers(null) .setJournalDirName(journalDir.getPath()) .setLedgerDirNames(new String[] { ledgerDir.getPath() }); if (truncateMasterKey) { try { Bookie b = new Bookie(conf); b.readJournal(); fail("Should not reach here!"); } catch (IOException ie) { // expected behaviour } } else { Bookie b = new Bookie(conf); b.readJournal(); b.readEntry(1, 100); try { b.readEntry(1, 101); fail("Shouldn't have found entry 101"); } catch (Bookie.NoEntryException e) { // correct behaviour } } } /** * Test partial index (truncate master key) with post-v3 journals */ @Test(timeout=60000) public void testPartialFileInfoPostV3Journal1() throws Exception { testPartialFileInfoPostV3Journal(true); } /** * Test partial index with post-v3 journals */ @Test(timeout=60000) public void testPartialFileInfoPostV3Journal2() throws Exception { testPartialFileInfoPostV3Journal(false); } /** * Test partial index file with post-v3 journals. */ private void testPartialFileInfoPostV3Journal(boolean truncateMasterKey) throws Exception { File journalDir = File.createTempFile("bookie", "journal"); journalDir.delete(); journalDir.mkdir(); Bookie.checkDirectoryStructure(Bookie.getCurrentDirectory(journalDir)); File ledgerDir = File.createTempFile("bookie", "ledger"); ledgerDir.delete(); ledgerDir.mkdir(); Bookie.checkDirectoryStructure(Bookie.getCurrentDirectory(ledgerDir)); byte[] masterKey = "testPasswd".getBytes(); writePostV3Journal(Bookie.getCurrentDirectory(journalDir), 100, masterKey); writePartialIndexFileForLedger(Bookie.getCurrentDirectory(ledgerDir), 1, masterKey, truncateMasterKey); ServerConfiguration conf = new ServerConfiguration() .setAllowLoopback(true) .setZkServers(null) .setJournalDirName(journalDir.getPath()) .setLedgerDirNames(new String[] { ledgerDir.getPath() }); Bookie b = new Bookie(conf); b.readJournal(); b.readEntry(1, 100); try { b.readEntry(1, 101); fail("Shouldn't have found entry 101"); } catch (Bookie.NoEntryException e) { // correct behaviour } } } BookieShutdownTest.java000066400000000000000000000107211244507361200354110ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/bookie/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied.
See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.bookie; import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; import org.apache.bookkeeper.conf.ServerConfiguration; import org.apache.bookkeeper.test.BookKeeperClusterTestCase; import org.apache.bookkeeper.client.LedgerHandle; import org.apache.bookkeeper.client.BookKeeper; import org.apache.bookkeeper.client.AsyncCallback.AddCallback; import org.apache.bookkeeper.proto.BookieServer; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.WriteCallback; import java.io.IOException; import org.apache.zookeeper.KeeperException; import java.nio.ByteBuffer; import org.junit.Test; import org.junit.Assert; public class BookieShutdownTest extends BookKeeperClusterTestCase { public BookieShutdownTest() { super(1); } /** * Test whether the Bookie can be shut down when the call comes from inside the bookie thread. * * @throws Exception */ @Test public void testBookieShutdownFromBookieThread() throws Exception { ServerConfiguration conf = bsConfs.get(0); killBookie(0); final CountDownLatch latch = new CountDownLatch(1); final CountDownLatch shutdownComplete = new CountDownLatch(1); Bookie bookie = new Bookie(conf) { @Override public void run() { try { latch.await(); } catch (InterruptedException e) { // Ignore } triggerBookieShutdown(ExitCode.BOOKIE_EXCEPTION); } @Override synchronized int shutdown(int exitCode) { super.shutdown(exitCode); shutdownComplete.countDown(); return exitCode; } }; bookie.start(); // after 1 sec, release the latch so the bookie thread triggers its own shutdown. Thread.sleep(1000); latch.countDown(); shutdownComplete.await(5000, TimeUnit.MILLISECONDS); } /** * Test whether the bookie server returns the correct exit code when it crashes. */ @Test(timeout=60000) public void testBookieServerThreadError() throws Exception { ServerConfiguration conf = bsConfs.get(0); killBookie(0); final CountDownLatch latch = new CountDownLatch(1); final CountDownLatch shutdownComplete = new CountDownLatch(1); // simulate a fatal error in the bookie thread by throwing OOM from addEntry BookieServer bkServer = new BookieServer(conf) { protected Bookie newBookie(ServerConfiguration conf) throws IOException, KeeperException, InterruptedException, BookieException { return new Bookie(conf) { @Override public void addEntry(ByteBuffer entry, WriteCallback cb, Object ctx, byte[] masterKey) throws IOException, BookieException { throw new OutOfMemoryError(); } }; } }; bkServer.start(); LedgerHandle lh = bkc.createLedger(1, 1, BookKeeper.DigestType.CRC32, "passwd".getBytes()); lh.asyncAddEntry("test".getBytes(), new AddCallback() { @Override public void addComplete(int rc, LedgerHandle lh, long entryId, Object ctx) { // don't care, only trying to trigger OOM } }, null); bkServer.join(); Assert.assertFalse("Should have died", bkServer.isRunning()); Assert.assertEquals("Should have died with server exception code", ExitCode.SERVER_EXCEPTION, bkServer.getExitCode()); } } CompactionTest.java000066400000000000000000000414601244507361200345450ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/bookie/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License.
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.bookie; import java.io.File; import java.io.IOException; import java.io.IOException; import java.nio.ByteBuffer; import java.util.Set; import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.ConcurrentHashMap; import java.util.Collections; import java.util.Enumeration; import org.apache.bookkeeper.meta.LedgerManager; import org.apache.bookkeeper.conf.ServerConfiguration; import org.apache.bookkeeper.client.LedgerEntry; import org.apache.bookkeeper.client.LedgerHandle; import org.apache.bookkeeper.client.BookKeeper.DigestType; import org.apache.bookkeeper.test.BookKeeperClusterTestCase; import org.apache.bookkeeper.util.TestUtils; import org.apache.zookeeper.AsyncCallback; import org.apache.bookkeeper.client.LedgerMetadata; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.GenericCallback; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.LedgerMetadataListener; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.Processor; import org.apache.bookkeeper.versioning.Version; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.junit.Before; import org.junit.Test; /** * This class tests the entry log compaction functionality. */ public class CompactionTest extends BookKeeperClusterTestCase { static Logger LOG = LoggerFactory.getLogger(CompactionTest.class); DigestType digestType; static int ENTRY_SIZE = 1024; static int NUM_BOOKIES = 1; int numEntries; int gcWaitTime; double minorCompactionThreshold; double majorCompactionThreshold; long minorCompactionInterval; long majorCompactionInterval; String msg; public CompactionTest() { super(NUM_BOOKIES); this.digestType = DigestType.CRC32; numEntries = 100; gcWaitTime = 1000; minorCompactionThreshold = 0.1f; majorCompactionThreshold = 0.5f; minorCompactionInterval = 2 * gcWaitTime / 1000; majorCompactionInterval = 4 * gcWaitTime / 1000; // a dummy message StringBuilder msgSB = new StringBuilder(); for (int i = 0; i < ENTRY_SIZE; i++) { msgSB.append("a"); } msg = msgSB.toString(); } @Before @Override public void setUp() throws Exception { // Set up the configuration properties needed. 
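// Entry logs roll after roughly numEntries * ENTRY_SIZE bytes; minor compaction reclaims logs whose live-data ratio falls below 0.1 every 2 seconds, major compaction below 0.5 every 4 seconds.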
baseConf.setEntryLogSizeLimit(numEntries * ENTRY_SIZE); baseConf.setGcWaitTime(gcWaitTime); baseConf.setMinorCompactionThreshold(minorCompactionThreshold); baseConf.setMajorCompactionThreshold(majorCompactionThreshold); baseConf.setMinorCompactionInterval(minorCompactionInterval); baseConf.setMajorCompactionInterval(majorCompactionInterval); super.setUp(); } LedgerHandle[] prepareData(int numEntryLogs, boolean changeNum) throws Exception { // since an entry log file can hold at most 100 entries // the first ledger writes 2 entries, which is less than the low water mark int num1 = 2; // the third ledger writes more entries than the high water mark int num3 = (int)(numEntries * 0.7f); // the second ledger writes the remaining entries, more than the low water mark // and fewer than the high water mark int num2 = numEntries - num3 - num1; LedgerHandle[] lhs = new LedgerHandle[3]; for (int i=0; i<3; ++i) { lhs[i] = bkc.createLedger(NUM_BOOKIES, NUM_BOOKIES, digestType, "".getBytes()); } for (int n = 0; n < numEntryLogs; n++) { for (int k = 0; k < num1; k++) { lhs[0].addEntry(msg.getBytes()); } for (int k = 0; k < num2; k++) { lhs[1].addEntry(msg.getBytes()); } for (int k = 0; k < num3; k++) { lhs[2].addEntry(msg.getBytes()); } if (changeNum) { --num2; ++num3; } } return lhs; } private void verifyLedger(long lid, long startEntryId, long endEntryId) throws Exception { LedgerHandle lh = bkc.openLedger(lid, digestType, "".getBytes()); Enumeration<LedgerEntry> entries = lh.readEntries(startEntryId, endEntryId); while (entries.hasMoreElements()) { LedgerEntry entry = entries.nextElement(); assertEquals(msg, new String(entry.getEntry())); } } @Test(timeout=60000) public void testDisableCompaction() throws Exception { // prepare data LedgerHandle[] lhs = prepareData(3, false); // disable compaction baseConf.setMinorCompactionThreshold(0.0f); baseConf.setMajorCompactionThreshold(0.0f); // restart bookies restartBookies(baseConf); // remove ledger2 and ledger3 // so entry logs 0 and 1 would only have ledger1 entries left bkc.deleteLedger(lhs[1].getId()); bkc.deleteLedger(lhs[2].getId()); LOG.info("Finished deleting the ledgers that contain the most entries."); Thread.sleep(baseConf.getMajorCompactionInterval() * 1000 + baseConf.getGcWaitTime()); // entry logs ([0,1].log) should not be compacted. for (File ledgerDirectory : tmpDirs) { assertTrue("Not found entry log files ([0,1].log) that should not have been compacted in ledgerDirectory: " + ledgerDirectory, TestUtils.hasLogFiles(ledgerDirectory, false, 0, 1)); } } @Test(timeout=60000) public void testMinorCompaction() throws Exception { // prepare data LedgerHandle[] lhs = prepareData(3, false); for (LedgerHandle lh : lhs) { lh.close(); } // disable major compaction baseConf.setMajorCompactionThreshold(0.0f); // restart bookies restartBookies(baseConf); // remove ledger2 and ledger3 bkc.deleteLedger(lhs[1].getId()); bkc.deleteLedger(lhs[2].getId()); LOG.info("Finished deleting the ledgers that contain the most entries."); Thread.sleep(baseConf.getMinorCompactionInterval() * 1000 + baseConf.getGcWaitTime()); // entry logs ([0,1,2].log) should be compacted.
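// ledger1 contributed only ~2% of each entry log, well below the 0.1 minor compaction threshold, so the surviving entries are rewritten and the old logs reclaimed.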
for (File ledgerDirectory : tmpDirs) { assertFalse("Found entry log file ([0,1,2].log) that should have been compacted in ledgerDirectory: " + ledgerDirectory, TestUtils.hasLogFiles(ledgerDirectory, true, 0, 1, 2)); } // even though the entry log files are removed, we can still access entries for ledger1 // since those entries have been compacted into a new entry log verifyLedger(lhs[0].getId(), 0, lhs[0].getLastAddConfirmed()); } @Test(timeout=60000) public void testMajorCompaction() throws Exception { // prepare data LedgerHandle[] lhs = prepareData(3, true); for (LedgerHandle lh : lhs) { lh.close(); } // disable minor compaction baseConf.setMinorCompactionThreshold(0.0f); // restart bookies restartBookies(baseConf); // remove ledger1 and ledger3 bkc.deleteLedger(lhs[0].getId()); bkc.deleteLedger(lhs[2].getId()); LOG.info("Finished deleting the ledgers that contain the most entries."); Thread.sleep(baseConf.getMajorCompactionInterval() * 1000 + baseConf.getGcWaitTime()); // entry logs ([0,1,2].log) should be compacted for (File ledgerDirectory : tmpDirs) { assertFalse("Found entry log file ([0,1,2].log) that should have been compacted in ledgerDirectory: " + ledgerDirectory, TestUtils.hasLogFiles(ledgerDirectory, true, 0, 1, 2)); } // even though the entry log files are removed, we can still access entries for ledger2 // since those entries have been compacted into a new entry log verifyLedger(lhs[1].getId(), 0, lhs[1].getLastAddConfirmed()); } @Test(timeout=60000) public void testMajorCompactionAboveThreshold() throws Exception { // prepare data LedgerHandle[] lhs = prepareData(3, false); for (LedgerHandle lh : lhs) { lh.close(); } // remove ledger1 and ledger2 bkc.deleteLedger(lhs[0].getId()); bkc.deleteLedger(lhs[1].getId()); LOG.info("Finished deleting the ledgers that contain fewer entries."); Thread.sleep(baseConf.getMajorCompactionInterval() * 1000 + baseConf.getGcWaitTime()); // entry logs ([0,1,2].log) should not be compacted for (File ledgerDirectory : tmpDirs) { assertTrue("Not found entry log files ([0,1,2].log) that should not have been compacted in ledgerDirectory: " + ledgerDirectory, TestUtils.hasLogFiles(ledgerDirectory, false, 0, 1, 2)); } } @Test(timeout=60000) public void testCompactionSmallEntryLogs() throws Exception { // create a ledger to write a few entries LedgerHandle alh = bkc.createLedger(NUM_BOOKIES, NUM_BOOKIES, digestType, "".getBytes()); for (int i=0; i<3; i++) { alh.addEntry(msg.getBytes()); } alh.close(); // restart bookie to roll entry log files restartBookies(); // prepare data LedgerHandle[] lhs = prepareData(3, false); for (LedgerHandle lh : lhs) { lh.close(); } // remove ledger2 and ledger3 bkc.deleteLedger(lhs[1].getId()); bkc.deleteLedger(lhs[2].getId()); LOG.info("Finished deleting the ledgers that contain the most entries."); Thread.sleep(baseConf.getMajorCompactionInterval() * 1000 + baseConf.getGcWaitTime()); // entry logs (0.log) should not be compacted // entry logs ([1,2,3].log) should be compacted.
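// 0.log holds only the three entries of the small ledger created first, which is still live, so it is left alone; logs 1-3 lose most of their live data once ledger2 and ledger3 are deleted.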
for (File ledgerDirectory : tmpDirs) { assertTrue("Not found entry log file ([0].log) that should not have been compacted in ledgerDirectory: " + ledgerDirectory, TestUtils.hasLogFiles(ledgerDirectory, true, 0)); assertFalse("Found entry log file ([1,2,3].log) that should have been compacted in ledgerDirectory: " + ledgerDirectory, TestUtils.hasLogFiles(ledgerDirectory, true, 1, 2, 3)); } // even though the entry log files are removed, we can still access entries for ledger1 // since those entries have been compacted into a new entry log verifyLedger(lhs[0].getId(), 0, lhs[0].getLastAddConfirmed()); } /** * Test that compaction doesn't add to the index without having persisted * the entry log first. This is needed because compaction doesn't go through the journal. * {@see https://issues.apache.org/jira/browse/BOOKKEEPER-530} * {@see https://issues.apache.org/jira/browse/BOOKKEEPER-664} */ @Test(timeout=60000) public void testCompactionSafety() throws Exception { tearDown(); // I don't want the test infrastructure ServerConfiguration conf = new ServerConfiguration().setAllowLoopback(true); final Set<Long> ledgers = Collections.newSetFromMap(new ConcurrentHashMap<Long, Boolean>()); LedgerManager manager = new LedgerManager() { @Override public void createLedger(LedgerMetadata metadata, GenericCallback<Long> cb) { unsupported(); } @Override public void removeLedgerMetadata(long ledgerId, Version version, GenericCallback<Void> vb) { unsupported(); } @Override public void readLedgerMetadata(long ledgerId, GenericCallback<LedgerMetadata> readCb) { unsupported(); } @Override public void writeLedgerMetadata(long ledgerId, LedgerMetadata metadata, GenericCallback<Void> cb) { unsupported(); } @Override public void asyncProcessLedgers(Processor<Long> processor, AsyncCallback.VoidCallback finalCb, Object context, int successRc, int failureRc) { unsupported(); } @Override public void registerLedgerMetadataListener(long ledgerId, LedgerMetadataListener listener) { unsupported(); } @Override public void unregisterLedgerMetadataListener(long ledgerId, LedgerMetadataListener listener) { unsupported(); } @Override public void close() throws IOException {} void unsupported() { LOG.error("Unsupported operation called", new Exception()); throw new RuntimeException("Unsupported op"); } @Override public LedgerRangeIterator getLedgerRanges() { final AtomicBoolean hasnext = new AtomicBoolean(true); return new LedgerManager.LedgerRangeIterator() { @Override public boolean hasNext() throws IOException { return hasnext.get(); } @Override public LedgerManager.LedgerRange next() throws IOException { hasnext.set(false); return new LedgerManager.LedgerRange(ledgers); } }; } }; File tmpDir = File.createTempFile("bkTest", ".dir"); tmpDir.delete(); tmpDir.mkdir(); File curDir = Bookie.getCurrentDirectory(tmpDir); Bookie.checkDirectoryStructure(curDir); conf.setLedgerDirNames(new String[] {tmpDir.toString()}); conf.setEntryLogSizeLimit(EntryLogger.LOGFILE_HEADER_SIZE + 3 * (4+ENTRY_SIZE)); conf.setGcWaitTime(100); conf.setMinorCompactionThreshold(0.7f); conf.setMajorCompactionThreshold(0.0f); conf.setMinorCompactionInterval(1); conf.setMajorCompactionInterval(10); conf.setPageLimit(1); final byte[] KEY = "foobar".getBytes(); File log0 = new File(curDir, "0.log"); LedgerDirsManager dirs = new LedgerDirsManager(conf); assertFalse("Log shouldn't exist", log0.exists()); InterleavedLedgerStorage storage = new InterleavedLedgerStorage(conf, manager, dirs); ledgers.add(1l); ledgers.add(2l); ledgers.add(3l); storage.setMasterKey(1, KEY); storage.setMasterKey(2, KEY); storage.setMasterKey(3, KEY);
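// the tiny entry log size limit (roughly three entries per log) means 0.log fills quickly; once ledgers 2 and 3 are dropped most of its data is dead and it becomes compactable.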
storage.addEntry(genEntry(1, 1, ENTRY_SIZE)); storage.addEntry(genEntry(2, 1, ENTRY_SIZE)); storage.addEntry(genEntry(2, 2, ENTRY_SIZE)); storage.addEntry(genEntry(3, 2, ENTRY_SIZE)); storage.flush(); storage.shutdown(); assertTrue("Log should exist", log0.exists()); ledgers.remove(2l); ledgers.remove(3l); storage = new InterleavedLedgerStorage(conf, manager, dirs); storage.start(); for (int i = 0; i < 10; i++) { if (!log0.exists()) { break; } Thread.sleep(1000); storage.entryLogger.flush(); // simulate sync thread } assertFalse("Log shouldnt exist", log0.exists()); ledgers.add(4l); storage.setMasterKey(4, KEY); storage.addEntry(genEntry(4, 1, ENTRY_SIZE)); // force ledger 1 page to flush storage = new InterleavedLedgerStorage(conf, manager, dirs); storage.getEntry(1, 1); // entry should exist } private ByteBuffer genEntry(long ledger, long entry, int size) { byte[] data = new byte[size]; ByteBuffer bb = ByteBuffer.wrap(new byte[size]); bb.putLong(ledger); bb.putLong(entry); while (bb.hasRemaining()) { bb.put((byte)0xFF); } bb.flip(); return bb; } } CookieTest.java000066400000000000000000000314751244507361200336670ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/bookie/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.bookie; import org.apache.commons.io.FileUtils; import org.apache.bookkeeper.client.BookKeeperAdmin; import org.apache.bookkeeper.conf.ClientConfiguration; import org.apache.bookkeeper.conf.ServerConfiguration; import org.apache.bookkeeper.test.ZooKeeperUtil; import org.apache.bookkeeper.test.PortManager; import org.apache.zookeeper.ZooKeeper; import java.io.File; import java.io.IOException; import org.junit.Test; import org.junit.After; import org.junit.Before; import static org.junit.Assert.*; import static org.apache.bookkeeper.bookie.UpgradeTest.*; public class CookieTest { ZooKeeperUtil zkutil; ZooKeeper zkc = null; final int bookiePort = PortManager.nextFreePort(); @Before public void setupZooKeeper() throws Exception { zkutil = new ZooKeeperUtil(); zkutil.startServer(); zkc = zkutil.getZooKeeperClient(); } @After public void tearDownZooKeeper() throws Exception { zkutil.killServer(); } private static String newDirectory() throws IOException { return newDirectory(true); } private static String newDirectory(boolean createCurDir) throws IOException { File d = File.createTempFile("bookie", "tmpdir"); d.delete(); d.mkdirs(); if (createCurDir) { new File(d, "current").mkdirs(); } return d.getPath(); } /** * Test starting bookie with clean state. 
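* No cookie exists yet, so the bookie generates one, stamps it into each directory and into zookeeper, and starts normally.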
*/ @Test(timeout=60000) public void testCleanStart() throws Exception { ServerConfiguration conf = new ServerConfiguration() .setZkServers(zkutil.getZooKeeperConnectString()) .setJournalDirName(newDirectory(false)) .setAllowLoopback(true) .setLedgerDirNames(new String[] { newDirectory(false) }) .setBookiePort(bookiePort); try { Bookie b = new Bookie(conf); } catch (Exception e) { fail("Should not reach here."); } } /** * Test that if a zookeeper cookie * is different to a local cookie, the bookie * will fail to start */ @Test(timeout=60000) public void testBadJournalCookie() throws Exception { ServerConfiguration conf1 = new ServerConfiguration() .setAllowLoopback(true) .setJournalDirName(newDirectory()) .setLedgerDirNames(new String[] { newDirectory() }) .setBookiePort(bookiePort); Cookie c = Cookie.generateCookie(conf1); c.writeToZooKeeper(zkc, conf1); String journalDir = newDirectory(); String ledgerDir = newDirectory(); ServerConfiguration conf2 = new ServerConfiguration() .setAllowLoopback(true) .setZkServers(zkutil.getZooKeeperConnectString()) .setJournalDirName(journalDir) .setLedgerDirNames(new String[] { ledgerDir }) .setBookiePort(bookiePort); Cookie c2 = Cookie.generateCookie(conf2); c2.writeToDirectory(new File(journalDir, "current")); c2.writeToDirectory(new File(ledgerDir, "current")); try { Bookie b = new Bookie(conf2); fail("Shouldn't have been able to start"); } catch (BookieException.InvalidCookieException ice) { // correct behaviour } } /** * Test that if a directory is removed from * the configuration, the bookie will fail to * start */ @Test(timeout=60000) public void testDirectoryMissing() throws Exception { String[] ledgerDirs = new String[] { newDirectory(), newDirectory(), newDirectory() }; String journalDir = newDirectory(); ServerConfiguration conf = new ServerConfiguration() .setAllowLoopback(true) .setZkServers(zkutil.getZooKeeperConnectString()) .setJournalDirName(journalDir) .setLedgerDirNames(ledgerDirs) .setBookiePort(bookiePort); Bookie b = new Bookie(conf); // should work fine b.start(); b.shutdown(); conf.setLedgerDirNames(new String[] { ledgerDirs[0], ledgerDirs[1] }); try { Bookie b2 = new Bookie(conf); fail("Shouldn't have been able to start"); } catch (BookieException.InvalidCookieException ice) { // correct behaviour } conf.setJournalDirName(newDirectory()).setLedgerDirNames(ledgerDirs); try { Bookie b2 = new Bookie(conf); fail("Shouldn't have been able to start"); } catch (BookieException.InvalidCookieException ice) { // correct behaviour } conf.setJournalDirName(journalDir); b = new Bookie(conf); b.start(); b.shutdown(); } /** * Test that if a directory is added to a * preexisting bookie, the bookie will fail * to start */ @Test(timeout=60000) public void testDirectoryAdded() throws Exception { String ledgerDir0 = newDirectory(); String journalDir = newDirectory(); ServerConfiguration conf = new ServerConfiguration() .setAllowLoopback(true) .setZkServers(zkutil.getZooKeeperConnectString()) .setJournalDirName(journalDir) .setLedgerDirNames(new String[] { ledgerDir0 }) .setBookiePort(bookiePort); Bookie b = new Bookie(conf); // should work fine b.start(); b.shutdown(); conf.setLedgerDirNames(new String[] { ledgerDir0, newDirectory() }); try { Bookie b2 = new Bookie(conf); fail("Shouldn't have been able to start"); } catch (BookieException.InvalidCookieException ice) { // correct behaviour } conf.setLedgerDirNames(new String[] { ledgerDir0 }); b = new Bookie(conf); b.start(); b.shutdown(); } /** * Test that if a directory's contents * are emptied, 
the bookie will fail to start */ @Test(timeout=60000) public void testDirectoryCleared() throws Exception { String ledgerDir0 = newDirectory(); String journalDir = newDirectory(); ServerConfiguration conf = new ServerConfiguration() .setAllowLoopback(true) .setZkServers(zkutil.getZooKeeperConnectString()) .setJournalDirName(journalDir) .setLedgerDirNames(new String[] { ledgerDir0 , newDirectory() }) .setBookiePort(bookiePort); Bookie b = new Bookie(conf); // should work fine b.start(); b.shutdown(); FileUtils.deleteDirectory(new File(ledgerDir0)); try { Bookie b2 = new Bookie(conf); fail("Shouldn't have been able to start"); } catch (BookieException.InvalidCookieException ice) { // correct behaviour } } /** * Test that if a bookie's port is changed * the bookie will fail to start */ @Test(timeout=60000) public void testBookiePortChanged() throws Exception { ServerConfiguration conf = new ServerConfiguration() .setAllowLoopback(true) .setZkServers(zkutil.getZooKeeperConnectString()) .setJournalDirName(newDirectory()) .setLedgerDirNames(new String[] { newDirectory() , newDirectory() }) .setBookiePort(bookiePort); Bookie b = new Bookie(conf); // should work fine b.start(); b.shutdown(); conf.setBookiePort(3182); try { b = new Bookie(conf); fail("Shouldn't have been able to start"); } catch (BookieException.InvalidCookieException ice) { // correct behaviour } } /** * Test that if a bookie tries to start * with the address of a bookie which has already * existed in the system, then the bookie will fail * to start */ @Test(timeout=60000) public void testNewBookieStartingWithAnotherBookiesPort() throws Exception { ServerConfiguration conf = new ServerConfiguration() .setAllowLoopback(true) .setZkServers(zkutil.getZooKeeperConnectString()) .setJournalDirName(newDirectory()) .setLedgerDirNames(new String[] { newDirectory() , newDirectory() }) .setBookiePort(bookiePort); Bookie b = new Bookie(conf); // should work fine b.start(); b.shutdown(); conf = new ServerConfiguration() .setAllowLoopback(true) .setZkServers(zkutil.getZooKeeperConnectString()) .setJournalDirName(newDirectory()) .setLedgerDirNames(new String[] { newDirectory() , newDirectory() }) .setBookiePort(bookiePort); try { b = new Bookie(conf); fail("Shouldn't have been able to start"); } catch (BookieException.InvalidCookieException ice) { // correct behaviour } } /* * Test Cookie verification with format. */ @Test(timeout=60000) public void testVerifyCookieWithFormat() throws Exception { ClientConfiguration adminConf = new ClientConfiguration() .setZkServers(zkutil.getZooKeeperConnectString()); adminConf.setProperty("bookkeeper.format", true); // Format the BK Metadata and generate INSTANCEID BookKeeperAdmin.format(adminConf, false, true); ServerConfiguration bookieConf = new ServerConfiguration() .setAllowLoopback(true) .setZkServers(zkutil.getZooKeeperConnectString()) .setJournalDirName(newDirectory(false)) .setLedgerDirNames(new String[] { newDirectory(false) }) .setBookiePort(bookiePort); // Bookie should start successfully for fresh env. new Bookie(bookieConf); // Format metadata one more time. BookKeeperAdmin.format(adminConf, false, true); try { new Bookie(bookieConf); fail("Bookie should not start with previous instance id."); } catch (BookieException.InvalidCookieException e) { assertTrue( "Bookie startup should fail because of invalid instance id", e.getMessage().contains("instanceId")); } // Now format the Bookie and restart. Bookie.format(bookieConf, false, true); // After bookie format bookie should be able to start again. 
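// Bookie.format wipes the local directories and the stale cookie, so a fresh cookie carrying the new instance id is generated on startup.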
new Bookie(bookieConf); } /** * Test that if a bookie is started with directories with * version 2 data, that it will fail to start (it needs upgrade) */ @Test(timeout=60000) public void testV2data() throws Exception { ServerConfiguration conf = new ServerConfiguration() .setAllowLoopback(true) .setZkServers(zkutil.getZooKeeperConnectString()) .setJournalDirName(newV2JournalDirectory()) .setLedgerDirNames(new String[] { newV2LedgerDirectory() }) .setBookiePort(bookiePort); try { Bookie b = new Bookie(conf); fail("Shouldn't have been able to start"); } catch (BookieException.InvalidCookieException ice) { // correct behaviour assertTrue("wrong exception", ice.getCause().getMessage().contains("upgrade needed")); } } /** * Test that if a bookie is started with directories with * version 1 data, that it will fail to start (it needs upgrade) */ @Test(timeout=60000) public void testV1data() throws Exception { ServerConfiguration conf = new ServerConfiguration() .setAllowLoopback(true) .setZkServers(zkutil.getZooKeeperConnectString()) .setJournalDirName(newV1JournalDirectory()) .setLedgerDirNames(new String[] { newV1LedgerDirectory() }) .setBookiePort(bookiePort); try { Bookie b = new Bookie(conf); fail("Shouldn't have been able to start"); } catch (BookieException.InvalidCookieException ice) { // correct behaviour assertTrue("wrong exception", ice.getCause().getMessage().contains("upgrade needed")); } } } CreateNewLogTest.java000066400000000000000000000066131244507361200347710ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/bookie/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.bookkeeper.bookie; import java.io.File; import java.io.IOException; import org.apache.bookkeeper.conf.ServerConfiguration; import org.junit.Test; import org.junit.After; import org.junit.Before; import junit.framework.Assert; import org.slf4j.Logger; import org.slf4j.LoggerFactory; public class CreateNewLogTest { private static final Logger LOG = LoggerFactory .getLogger(CreateNewLogTest.class); private String[] ledgerDirs; private int numDirs = 100; @Before public void setUp() throws Exception{ ledgerDirs = new String[numDirs]; for(int i = 0; i < numDirs; i++){ File temp = File.createTempFile("bookie", "test"); temp.delete(); temp.mkdir(); File currentTemp = new File(temp.getAbsoluteFile() + "/current"); currentTemp.mkdir(); ledgerDirs[i] = temp.getPath(); } } @After public void tearDown() throws Exception{ for(int i = 0; i < numDirs; i++){ File f = new File(ledgerDirs[i]); deleteRecursive(f); } } private void deleteRecursive(File f) { if (f.isDirectory()){ for (File c : f.listFiles()){ deleteRecursive(c); } } f.delete(); } /** * Checks if new log file id is verified against all directories. 
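* Otherwise a newly chosen log id could collide with a log file that already exists in one of the other ledger directories (BOOKKEEPER-465).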
* * {@link https://issues.apache.org/jira/browse/BOOKKEEPER-465} * * @throws Exception */ @Test(timeout=60000) public void testCreateNewLog() throws Exception { ServerConfiguration conf = new ServerConfiguration(); // Creating a new configuration with a number of // ledger directories. conf.setLedgerDirNames(ledgerDirs); conf.setAllowLoopback(true); LedgerDirsManager ledgerDirsManager = new LedgerDirsManager(conf); EntryLogger el = new EntryLogger(conf, ledgerDirsManager); // Extracted from createNewLog() String logFileName = Long.toHexString(1) + ".log"; File dir = ledgerDirsManager.pickRandomWritableDir(); LOG.info("Picked this directory: " + dir); File newLogFile = new File(dir, logFileName); newLogFile.createNewFile(); // Calls createNewLog, and with the number of directories we // are using, if it picks one at random it will fail. el.createNewLog(); LOG.info("This is the current log id: " + el.getCurrentLogId()); Assert.assertTrue("Wrong log id", el.getCurrentLogId() > 1); } } EntryLogTest.java000066400000000000000000000212771244507361200342200ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/bookie/** * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
* */ package org.apache.bookkeeper.bookie; import java.io.File; import java.io.FileNotFoundException; import java.io.IOException; import java.io.RandomAccessFile; import java.nio.ByteBuffer; import junit.framework.TestCase; import org.apache.bookkeeper.bookie.GarbageCollectorThread.EntryLogMetadata; import org.apache.bookkeeper.bookie.GarbageCollectorThread.ExtractionScanner; import org.apache.bookkeeper.conf.ServerConfiguration; import org.junit.After; import org.junit.Assert; import org.junit.Before; import org.junit.Test; import org.slf4j.Logger; import org.slf4j.LoggerFactory; public class EntryLogTest extends TestCase { static Logger LOG = LoggerFactory.getLogger(EntryLogTest.class); @Before public void setUp() throws Exception { } @Test(timeout=60000) public void testCorruptEntryLog() throws Exception { File tmpDir = File.createTempFile("bkTest", ".dir"); tmpDir.delete(); tmpDir.mkdir(); File curDir = Bookie.getCurrentDirectory(tmpDir); Bookie.checkDirectoryStructure(curDir); int gcWaitTime = 1000; ServerConfiguration conf = new ServerConfiguration(); conf.setAllowLoopback(true); conf.setGcWaitTime(gcWaitTime); conf.setLedgerDirNames(new String[] {tmpDir.toString()}); Bookie bookie = new Bookie(conf); // create some entries EntryLogger logger = ((InterleavedLedgerStorage)bookie.ledgerStorage).entryLogger; logger.addEntry(1, generateEntry(1, 1)); logger.addEntry(3, generateEntry(3, 1)); logger.addEntry(2, generateEntry(2, 1)); logger.flush(); // now lets truncate the file to corrupt the last entry, which simulates a partial write File f = new File(curDir, "0.log"); RandomAccessFile raf = new RandomAccessFile(f, "rw"); raf.setLength(raf.length()-10); raf.close(); // now see which ledgers are in the log logger = new EntryLogger(conf, bookie.getLedgerDirsManager()); EntryLogMetadata meta = new EntryLogMetadata(0L); ExtractionScanner scanner = new ExtractionScanner(meta); try { logger.scanEntryLog(0L, scanner); fail("Should not reach here!"); } catch (IOException ie) { } LOG.info("Extracted Meta From Entry Log {}", meta); assertNotNull(meta.ledgersMap.get(1L)); assertNull(meta.ledgersMap.get(2L)); assertNotNull(meta.ledgersMap.get(3L)); } private ByteBuffer generateEntry(long ledger, long entry) { byte[] data = ("ledger-" + ledger + "-" + entry).getBytes(); ByteBuffer bb = ByteBuffer.wrap(new byte[8 + 8 + data.length]); bb.putLong(ledger); bb.putLong(entry); bb.put(data); bb.flip(); return bb; } @Test(timeout=60000) public void testMissingLogId() throws Exception { File tmpDir = File.createTempFile("entryLogTest", ".dir"); tmpDir.delete(); tmpDir.mkdir(); File curDir = Bookie.getCurrentDirectory(tmpDir); Bookie.checkDirectoryStructure(curDir); ServerConfiguration conf = new ServerConfiguration(); conf.setAllowLoopback(true); conf.setLedgerDirNames(new String[] {tmpDir.toString()}); Bookie bookie = new Bookie(conf); // create some entries int numLogs = 3; int numEntries = 10; long[][] positions = new long[2*numLogs][]; for (int i=0; i seq = wlh.readEntries(0, numMsgs - 1); assertTrue("Enumeration of ledger entries has no element", seq.hasMoreElements() == true); int entryId = 0; while (seq.hasMoreElements()) { LedgerEntry e = seq.nextElement(); assertEquals(entryId, e.getEntryId()); Assert.assertArrayEquals(dummyMsg.getBytes(), e.getEntry()); ++entryId; } assertEquals(entryId, numMsgs); } @Test(timeout=60000) public void testEmptyIndexPage() throws Exception { LOG.debug("Testing EmptyIndexPage"); Bookie.SyncThread syncThread = bs.get(0).getBookie().syncThread; assertNotNull("Not found 
SyncThread.", syncThread); syncThread.suspendSync(); // Create a ledger LedgerHandle lh1 = bkc.createLedger(1, 1, digestType, "".getBytes()); String dummyMsg = "NoSuchLedger"; // write two page entries to ledger 2 int numMsgs = 2 * pageSize / 8; LedgerHandle lh2 = bkc.createLedger(1, 1, digestType, "".getBytes()); for (int i=0; i seq = lh2.readEntries(0, numMsgs - 1); assertTrue("Enumeration of ledger entries has no element", seq.hasMoreElements() == true); int entryId = 0; while (seq.hasMoreElements()) { LedgerEntry e = seq.nextElement(); assertEquals(entryId, e.getEntryId()); Assert.assertArrayEquals(dummyMsg.getBytes(), e.getEntry()); ++entryId; } assertEquals(entryId, numMsgs); } } LedgerCacheTest.java000066400000000000000000000455761244507361200346130ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/bookie/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.bookie; import java.io.File; import java.io.IOException; import java.nio.ByteBuffer; import org.apache.bookkeeper.bookie.Bookie.NoLedgerException; import org.apache.bookkeeper.conf.ServerConfiguration; import org.apache.bookkeeper.meta.LedgerManagerFactory; import org.apache.bookkeeper.util.BookKeeperConstants; import org.apache.bookkeeper.util.SnapshotMap; import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.LinkedBlockingQueue; import org.apache.commons.io.FileUtils; import org.junit.After; import org.junit.Assert; import org.junit.Before; import org.junit.Test; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import junit.framework.TestCase; /** * LedgerCache related test cases */ public class LedgerCacheTest extends TestCase { static Logger LOG = LoggerFactory.getLogger(LedgerCacheTest.class); SnapshotMap activeLedgers; LedgerManagerFactory ledgerManagerFactory; LedgerCache ledgerCache; Thread flushThread; ServerConfiguration conf; File txnDir, ledgerDir; private Bookie bookie; @Override @Before public void setUp() throws Exception { txnDir = File.createTempFile("ledgercache", "txn"); txnDir.delete(); txnDir.mkdir(); ledgerDir = File.createTempFile("ledgercache", "ledger"); ledgerDir.delete(); ledgerDir.mkdir(); // create current dir new File(ledgerDir, BookKeeperConstants.CURRENT_DIR).mkdir(); conf = new ServerConfiguration(); conf.setZkServers(null); conf.setAllowLoopback(true); conf.setJournalDirName(txnDir.getPath()); conf.setLedgerDirNames(new String[] { ledgerDir.getPath() }); bookie = new Bookie(conf); ledgerManagerFactory = LedgerManagerFactory.newLedgerManagerFactory(conf, null); activeLedgers = new SnapshotMap(); ledgerCache = ((InterleavedLedgerStorage) bookie.ledgerStorage).ledgerCache; } @Override @After public void tearDown() throws Exception { if (flushThread != null) { 
flushThread.interrupt(); flushThread.join(); } bookie.ledgerStorage.shutdown(); ledgerManagerFactory.uninitialize(); FileUtils.deleteDirectory(txnDir); FileUtils.deleteDirectory(ledgerDir); } private void newLedgerCache() throws IOException { if (ledgerCache != null) { ledgerCache.close(); } ledgerCache = ((InterleavedLedgerStorage) bookie.ledgerStorage).ledgerCache = new LedgerCacheImpl( conf, activeLedgers, bookie.getLedgerDirsManager()); flushThread = new Thread() { public void run() { while (true) { try { sleep(conf.getFlushInterval()); ledgerCache.flushLedger(true); } catch (InterruptedException ie) { // killed by teardown Thread.currentThread().interrupt(); return; } catch (Exception e) { LOG.error("Exception in flush thread", e); } } } }; flushThread.start(); } @Test(timeout=30000) public void testAddEntryException() throws IOException { // set page limitation conf.setPageLimit(10); // create a ledger cache newLedgerCache(); /* * Populate ledger cache. */ try { byte[] masterKey = "blah".getBytes(); for( int i = 0; i < 100; i++) { ledgerCache.setMasterKey((long)i, masterKey); ledgerCache.putEntryOffset(i, 0, i*8); } } catch (IOException e) { LOG.error("Got IOException.", e); fail("Failed to add entry."); } } @Test(timeout=30000) public void testLedgerEviction() throws Exception { int numEntries = 10; // limit open files & pages conf.setOpenFileLimit(1).setPageLimit(2) .setPageSize(8 * numEntries); // create ledger cache newLedgerCache(); try { int numLedgers = 3; byte[] masterKey = "blah".getBytes(); for (int i=1; i<=numLedgers; i++) { ledgerCache.setMasterKey((long)i, masterKey); for (int j=0; j ledgerQ = new LinkedBlockingQueue(1); final byte[] masterKey = "masterKey".getBytes(); Thread newLedgerThread = new Thread() { public void run() { try { for (int i = 0; i < 1000 && rc.get() == 0; i++) { ledgerCache.setMasterKey(i, masterKey); ledgerQ.put((long)i); } } catch (Exception e) { rc.set(-1); LOG.error("Exception in new ledger thread", e); } } }; newLedgerThread.start(); Thread flushThread = new Thread() { public void run() { try { while (true) { Long id = ledgerQ.peek(); if (id == null) { continue; } LOG.info("Put entry for {}", id); try { ledgerCache.putEntryOffset((long)id, 1, 0); } catch (Bookie.NoLedgerException nle) { //ignore } ledgerCache.flushLedger(true); } } catch (Exception e) { rc.set(-1); LOG.error("Exception in flush thread", e); } } }; flushThread.start(); Thread deleteThread = new Thread() { public void run() { try { while (true) { long id = ledgerQ.take(); LOG.info("Deleting {}", id); ledgerCache.deleteLedger(id); } } catch (Exception e) { rc.set(-1); LOG.error("Exception in delete thread", e); } } }; deleteThread.start(); newLedgerThread.join(); assertEquals("Should have been no errors", rc.get(), 0); deleteThread.interrupt(); flushThread.interrupt(); } private ByteBuffer generateEntry(long ledger, long entry) { byte[] data = ("ledger-" + ledger + "-" + entry).getBytes(); ByteBuffer bb = ByteBuffer.wrap(new byte[8 + 8 + data.length]); bb.putLong(ledger); bb.putLong(entry); bb.put(data); bb.flip(); return bb; } } TestLedgerDirsManager.java000066400000000000000000000055001244507361200357630ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/bookie/** * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. 
The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.bookie; import java.io.File; import junit.framework.TestCase; import org.apache.bookkeeper.bookie.LedgerDirsManager.NoWritableLedgerDirException; import org.apache.bookkeeper.conf.ServerConfiguration; import org.junit.Before; import org.junit.Test; import org.slf4j.Logger; import org.slf4j.LoggerFactory; public class TestLedgerDirsManager extends TestCase { static Logger LOG = LoggerFactory.getLogger(TestLedgerDirsManager.class); ServerConfiguration conf; File curDir; LedgerDirsManager dirsManager; @Before public void setUp() throws Exception { File tmpDir = File.createTempFile("bkTest", ".dir"); tmpDir.delete(); tmpDir.mkdir(); curDir = Bookie.getCurrentDirectory(tmpDir); Bookie.checkDirectoryStructure(curDir); ServerConfiguration conf = new ServerConfiguration(); conf.setAllowLoopback(true); conf.setLedgerDirNames(new String[] {tmpDir.toString()}); dirsManager = new LedgerDirsManager(conf); } @Test(timeout=60000) public void testPickWritableDirExclusive() throws Exception { try { dirsManager.pickRandomWritableDir(curDir); fail("Should not reach here due to there is no writable ledger dir."); } catch (NoWritableLedgerDirException nwlde) { // expected to fail with no writable ledger dir assertTrue(true); } } @Test(timeout=60000) public void testNoWritableDir() throws Exception { try { dirsManager.addToFilledDirs(curDir); dirsManager.pickRandomWritableDir(); fail("Should not reach here due to there is no writable ledger dir."); } catch (NoWritableLedgerDirException nwlde) { // expected to fail with no writable ledger dir assertEquals("Should got NoWritableLedgerDirException w/ 'All ledger directories are non writable'.", "All ledger directories are non writable", nwlde.getMessage()); } } } UpgradeTest.java000066400000000000000000000221351244507361200340360ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/bookie/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
* */ package org.apache.bookkeeper.bookie; import java.util.Arrays; import java.nio.ByteBuffer; import java.nio.channels.FileChannel; import java.io.File; import java.io.IOException; import java.io.FileOutputStream; import java.io.OutputStreamWriter; import java.io.BufferedWriter; import java.io.PrintStream; import java.io.RandomAccessFile; import org.junit.Before; import org.junit.After; import org.junit.Test; import static org.junit.Assert.*; import org.apache.bookkeeper.client.ClientUtil; import org.apache.bookkeeper.client.LedgerHandle; import org.apache.bookkeeper.conf.ServerConfiguration; import org.apache.zookeeper.ZooKeeper; import org.apache.bookkeeper.test.ZooKeeperUtil; import org.apache.bookkeeper.test.PortManager; import org.slf4j.Logger; import org.slf4j.LoggerFactory; public class UpgradeTest { static Logger LOG = LoggerFactory.getLogger(FileInfo.class); ZooKeeperUtil zkutil; ZooKeeper zkc = null; final static int bookiePort = PortManager.nextFreePort(); @Before public void setupZooKeeper() throws Exception { zkutil = new ZooKeeperUtil(); zkutil.startServer(); zkc = zkutil.getZooKeeperClient(); } @After public void tearDownZooKeeper() throws Exception { zkutil.killServer(); } static void writeLedgerDir(File dir, byte[] masterKey) throws Exception { long ledgerId = 1; File fn = new File(dir, LedgerCacheImpl.getLedgerName(ledgerId)); fn.getParentFile().mkdirs(); FileInfo fi = new FileInfo(fn, masterKey); // force creation of index file fi.write(new ByteBuffer[]{ ByteBuffer.allocate(0) }, 0); fi.close(true); long logId = 0; ByteBuffer LOGFILE_HEADER = ByteBuffer.allocate(1024); LOGFILE_HEADER.put("BKLO".getBytes()); FileChannel logfile = new RandomAccessFile( new File(dir, Long.toHexString(logId)+".log"), "rw").getChannel(); logfile.write((ByteBuffer) LOGFILE_HEADER.clear()); logfile.close(); } static JournalChannel writeJournal(File journalDir, int numEntries, byte[] masterKey) throws Exception { long logId = System.currentTimeMillis(); JournalChannel jc = new JournalChannel(journalDir, logId); BufferedChannel bc = jc.getBufferedChannel(); long ledgerId = 1; byte[] data = new byte[1024]; Arrays.fill(data, (byte)'X'); long lastConfirmed = LedgerHandle.INVALID_ENTRY_ID; for (int i = 1; i <= numEntries; i++) { ByteBuffer packet = ClientUtil.generatePacket(ledgerId, i, lastConfirmed, i*data.length, data).toByteBuffer(); lastConfirmed = i; ByteBuffer lenBuff = ByteBuffer.allocate(4); lenBuff.putInt(packet.remaining()); lenBuff.flip(); bc.write(lenBuff); bc.write(packet); } bc.flush(true); return jc; } static String newV1JournalDirectory() throws Exception { File d = File.createTempFile("bookie", "tmpdir"); d.delete(); d.mkdirs(); writeJournal(d, 100, "foobar".getBytes()).close(); return d.getPath(); } static String newV1LedgerDirectory() throws Exception { File d = File.createTempFile("bookie", "tmpdir"); d.delete(); d.mkdirs(); writeLedgerDir(d, "foobar".getBytes()); return d.getPath(); } static void createVersion2File(String dir) throws Exception { File versionFile = new File(dir, "VERSION"); FileOutputStream fos = new FileOutputStream(versionFile); BufferedWriter bw = null; try { bw = new BufferedWriter(new OutputStreamWriter(fos)); bw.write(String.valueOf(2)); } finally { if (bw != null) { bw.close(); } fos.close(); } } static String newV2JournalDirectory() throws Exception { String d = newV1JournalDirectory(); createVersion2File(d); return d; } static String newV2LedgerDirectory() throws Exception { String d = newV1LedgerDirectory(); createVersion2File(d); return d; } private 
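// Shared upgrade scenario: a bookie started on v1/v2 directories must fail
// with BookieException.InvalidCookieException ("upgrade needed"); upgrade()
// makes it startable, rollback() restores the failing state, and a second
// upgrade() followed by finalizeUpgrade() makes the new layout permanent.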
static void testUpgradeProceedure(String zkServers, String journalDir, String ledgerDir) throws Exception { ServerConfiguration conf = new ServerConfiguration() .setAllowLoopback(true) .setZkServers(zkServers) .setJournalDirName(journalDir) .setLedgerDirNames(new String[] { ledgerDir }) .setBookiePort(bookiePort); Bookie b = null; try { b = new Bookie(conf); fail("Shouldn't have been able to start"); } catch (BookieException.InvalidCookieException e) { // correct behaviour assertTrue("wrong exception", e.getMessage().contains("upgrade needed")); } FileSystemUpgrade.upgrade(conf); // should work fine b = new Bookie(conf); b.start(); b.shutdown(); b = null; FileSystemUpgrade.rollback(conf); try { b = new Bookie(conf); fail("Shouldn't have been able to start"); } catch (BookieException.InvalidCookieException e) { // correct behaviour assertTrue("wrong exception", e.getMessage().contains("upgrade needed")); } FileSystemUpgrade.upgrade(conf); FileSystemUpgrade.finalizeUpgrade(conf); b = new Bookie(conf); b.start(); b.shutdown(); b = null; } @Test(timeout=60000) public void testUpgradeV1toCurrent() throws Exception { String journalDir = newV1JournalDirectory(); String ledgerDir = newV1LedgerDirectory(); testUpgradeProceedure(zkutil.getZooKeeperConnectString(), journalDir, ledgerDir); } @Test(timeout=60000) public void testUpgradeV2toCurrent() throws Exception { String journalDir = newV2JournalDirectory(); String ledgerDir = newV2LedgerDirectory(); testUpgradeProceedure(zkutil.getZooKeeperConnectString(), journalDir, ledgerDir); } @Test(timeout=60000) public void testUpgradeCurrent() throws Exception { String journalDir = newV2JournalDirectory(); String ledgerDir = newV2LedgerDirectory(); testUpgradeProceedure(zkutil.getZooKeeperConnectString(), journalDir, ledgerDir); // Upgrade again ServerConfiguration conf = new ServerConfiguration() .setAllowLoopback(true) .setZkServers(zkutil.getZooKeeperConnectString()) .setJournalDirName(journalDir) .setLedgerDirNames(new String[] { ledgerDir }) .setBookiePort(bookiePort); FileSystemUpgrade.upgrade(conf); // should work fine with current directory Bookie b = new Bookie(conf); b.start(); b.shutdown(); } @Test(timeout=60000) public void testCommandLine() throws Exception { PrintStream origerr = System.err; PrintStream origout = System.out; File output = File.createTempFile("bookie", "stdout"); File erroutput = File.createTempFile("bookie", "stderr"); System.setOut(new PrintStream(output)); System.setErr(new PrintStream(erroutput)); try { FileSystemUpgrade.main(new String[] { "-h" }); try { // test without conf FileSystemUpgrade.main(new String[] { "-u" }); fail("Should have failed"); } catch (IllegalArgumentException iae) { assertTrue("Wrong exception " + iae.getMessage(), iae.getMessage().contains("without configuration")); } File f = File.createTempFile("bookie", "tmpconf"); try { // test without upgrade op FileSystemUpgrade.main(new String[] { "--conf", f.getPath() }); fail("Should have failed"); } catch (IllegalArgumentException iae) { assertTrue("Wrong exception " + iae.getMessage(), iae.getMessage().contains("Must specify -upgrade")); } } finally { System.setOut(origout); System.setErr(origerr); } } } 
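UpgradeTest above drives the whole on-disk upgrade lifecycle through FileSystemUpgrade. What follows is a minimal illustrative sketch (not part of the release) of that lifecycle as a standalone operator-style program, using only the entry points the test itself exercises. The class name UpgradeLifecycleSketch, the ZooKeeper connect string, the bookie port, and the directory paths are placeholders; a reachable ZooKeeper with an initialized ledger layout is assumed.

// Minimal sketch, assuming placeholder paths and a running ZooKeeper.
import org.apache.bookkeeper.bookie.Bookie;
import org.apache.bookkeeper.bookie.FileSystemUpgrade;
import org.apache.bookkeeper.conf.ServerConfiguration;

public class UpgradeLifecycleSketch {
    public static void main(String[] args) throws Exception {
        ServerConfiguration conf = new ServerConfiguration()
            .setAllowLoopback(true)
            .setZkServers("localhost:2181")                          // placeholder
            .setJournalDirName("/data/bk/journal")                   // placeholder
            .setLedgerDirNames(new String[] { "/data/bk/ledgers" })  // placeholder
            .setBookiePort(3181);                                    // placeholder

        // 1. Rewrite the directory layout in place; the old data is kept
        //    so that a rollback remains possible.
        FileSystemUpgrade.upgrade(conf);

        // 2. The bookie now starts where it previously failed with
        //    BookieException.InvalidCookieException ("upgrade needed").
        Bookie b = new Bookie(conf);
        b.start();
        b.shutdown();

        // 3a. Either return to the pre-upgrade layout...
        FileSystemUpgrade.rollback(conf);

        // 3b. ...or upgrade again and make the new layout permanent.
        FileSystemUpgrade.upgrade(conf);
        FileSystemUpgrade.finalizeUpgrade(conf);
    }
}

The same steps are reachable from the command line through FileSystemUpgrade.main, whose -u (upgrade) and --conf arguments are exercised by testCommandLine above.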
bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/client/000077500000000000000000000000001244507361200310265ustar00rootroot00000000000000BookKeeperTest.java000066400000000000000000000212311244507361200344770ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/clientpackage org.apache.bookkeeper.client; /* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.atomic.AtomicBoolean; import org.apache.bookkeeper.conf.ClientConfiguration; import org.apache.bookkeeper.client.AsyncCallback.AddCallback; import org.apache.bookkeeper.client.BookKeeper.DigestType; import org.apache.bookkeeper.test.BaseTestCase; import org.apache.zookeeper.ZooKeeper; import org.apache.zookeeper.Watcher; import org.apache.zookeeper.WatchedEvent; import org.apache.zookeeper.KeeperException; import org.junit.Assert; import org.junit.Test; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Tests of the main BookKeeper client */ public class BookKeeperTest extends BaseTestCase { static Logger LOG = LoggerFactory.getLogger(BookKeeperTest.class); DigestType digestType; public BookKeeperTest(DigestType digestType) { super(4); this.digestType = digestType; } @Test(timeout=60000) public void testConstructionZkDelay() throws Exception { ClientConfiguration conf = new ClientConfiguration() .setZkServers(zkUtil.getZooKeeperConnectString()) .setZkTimeout(20000); CountDownLatch l = new CountDownLatch(1); zkUtil.sleepServer(5, l); l.await(); BookKeeper bkc = new BookKeeper(conf); bkc.createLedger(digestType, "testPasswd".getBytes()).close(); bkc.close(); } @Test(timeout=60000) public void testConstructionNotConnectedExplicitZk() throws Exception { ClientConfiguration conf = new ClientConfiguration() .setZkServers(zkUtil.getZooKeeperConnectString()) .setZkTimeout(20000); CountDownLatch l = new CountDownLatch(1); zkUtil.sleepServer(5, l); l.await(); ZooKeeper zk = new ZooKeeper(zkUtil.getZooKeeperConnectString(), 10000, new Watcher() { @Override public void process(WatchedEvent event) { } }); assertFalse("ZK shouldn't have connected yet", zk.getState().isConnected()); try { BookKeeper bkc = new BookKeeper(conf, zk); fail("Shouldn't be able to construct with unconnected zk"); } catch (KeeperException.ConnectionLossException cle) { // correct behaviour } } /** * Test that bookkeeper is not able to open ledgers if * it provides the wrong password or wrong digest */ @Test(timeout=60000) public void testBookkeeperPassword() throws Exception { ClientConfiguration conf = new ClientConfiguration() .setZkServers(zkUtil.getZooKeeperConnectString()); BookKeeper bkc = new 
BookKeeper(conf); DigestType digestCorrect = digestType; byte[] passwdCorrect = "AAAAAAA".getBytes(); DigestType digestBad = digestType == DigestType.MAC ? DigestType.CRC32 : DigestType.MAC; byte[] passwdBad = "BBBBBBB".getBytes(); LedgerHandle lh = null; try { lh = bkc.createLedger(digestCorrect, passwdCorrect); long id = lh.getId(); for (int i = 0; i < 100; i++) { lh.addEntry("foobar".getBytes()); } lh.close(); // try open with bad passwd try { bkc.openLedger(id, digestCorrect, passwdBad); fail("Shouldn't be able to open with bad passwd"); } catch (BKException.BKUnauthorizedAccessException bke) { // correct behaviour } // try open with bad digest try { bkc.openLedger(id, digestBad, passwdCorrect); fail("Shouldn't be able to open with bad digest"); } catch (BKException.BKDigestMatchException bke) { // correct behaviour } // try open with both bad try { bkc.openLedger(id, digestBad, passwdBad); fail("Shouldn't be able to open with bad passwd and digest"); } catch (BKException.BKUnauthorizedAccessException bke) { // correct behaviour } // try open with both correct bkc.openLedger(id, digestCorrect, passwdCorrect).close(); } finally { if (lh != null) { lh.close(); } bkc.close(); } } /** * Tests that when trying to use a closed BK client object we get * a callback error and not an InterruptedException. * @throws Exception */ @Test(timeout=60000) public void testAsyncReadWithError() throws Exception { LedgerHandle lh = bkc.createLedger(3, 3, DigestType.CRC32, "testPasswd".getBytes()); bkc.close(); final AtomicInteger result = new AtomicInteger(0); final CountDownLatch counter = new CountDownLatch(1); // Try to write, we shoud get and error callback but not an exception lh.asyncAddEntry("test".getBytes(), new AddCallback() { public void addComplete(int rc, LedgerHandle lh, long entryId, Object ctx) { result.set(rc); counter.countDown(); } }, null); counter.await(); Assert.assertTrue(result.get() != 0); } /** * Test that bookkeeper will close cleanly if close is issued * while another operation is in progress. 
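* Each iteration races client.close() against an in-flight asyncAddEntry
* while the ensemble is being disturbed (startNewBookie/killBookie), and
* requires the close to finish within ten seconds.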
*/ @Test(timeout=60000) public void testCloseDuringOp() throws Exception { ClientConfiguration conf = new ClientConfiguration() .setZkServers(zkUtil.getZooKeeperConnectString()); for (int i = 0; i < 100; i++) { final BookKeeper client = new BookKeeper(conf); final CountDownLatch l = new CountDownLatch(1); final AtomicBoolean success = new AtomicBoolean(false); Thread t = new Thread() { public void run() { try { LedgerHandle lh = client.createLedger(3, 3, digestType, "testPasswd".getBytes()); startNewBookie(); killBookie(0); lh.asyncAddEntry("test".getBytes(), new AddCallback() { @Override public void addComplete(int rc, LedgerHandle lh, long entryId, Object ctx) { // noop, we don't care if this completes } }, null); client.close(); success.set(true); l.countDown(); } catch (Exception e) { LOG.error("Error running test", e); success.set(false); l.countDown(); } } }; t.start(); assertTrue("Close never completed", l.await(10, TimeUnit.SECONDS)); assertTrue("Close was not successful", success.get()); } } @Test(timeout=60000) public void testIsClosed() throws Exception { ClientConfiguration conf = new ClientConfiguration() .setZkServers(zkUtil.getZooKeeperConnectString()); BookKeeper bkc = new BookKeeper(conf); LedgerHandle lh = bkc.createLedger(digestType, "testPasswd".getBytes()); Long lId = lh.getId(); lh.addEntry("000".getBytes()); boolean result = bkc.isClosed(lId); Assert.assertTrue("Ledger shouldn't be flagged as closed!",!result); lh.close(); result = bkc.isClosed(lId); Assert.assertTrue("Ledger should be flagged as closed!",result); bkc.close(); } } BookKeeperTestClient.java000066400000000000000000000046511244507361200356450ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/clientpackage org.apache.bookkeeper.client; /* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ import java.io.IOException; import java.util.concurrent.Executors; import org.apache.bookkeeper.conf.ClientConfiguration; import org.apache.bookkeeper.client.AsyncCallback.CreateCallback; import org.apache.bookkeeper.client.AsyncCallback.DeleteCallback; import org.apache.bookkeeper.client.AsyncCallback.OpenCallback; import org.apache.bookkeeper.client.BKException.Code; import org.apache.bookkeeper.proto.BookieClient; import org.apache.bookkeeper.util.OrderedSafeExecutor; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.WatchedEvent; import org.apache.zookeeper.Watcher; import org.apache.zookeeper.ZooKeeper; import org.jboss.netty.channel.socket.ClientSocketChannelFactory; import org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory; /** * Test BookKeeperClient which allows access to members we don't * wish to expose in the public API. 
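* Currently that means the ZooKeeper handle, the client configuration,
* and a blocking re-read of the bookie list via the bookie watcher.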
*/ public class BookKeeperTestClient extends BookKeeper { public BookKeeperTestClient(ClientConfiguration conf) throws IOException, InterruptedException, KeeperException { super(conf); } public ZooKeeper getZkHandle() { return super.getZkHandle(); } public ClientConfiguration getConf() { return super.getConf(); } /** * Force a read to zookeeper to get list of bookies. * * @throws InterruptedException * @throws KeeperException */ public void readBookiesBlocking() throws InterruptedException, KeeperException { bookieWatcher.readBookiesBlocking(); } } BookieRecoveryTest.java000066400000000000000000001141371244507361200354100ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/clientpackage org.apache.bookkeeper.client; /* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ import java.io.IOException; import java.net.InetAddress; import java.net.InetSocketAddress; import java.util.ArrayList; import java.util.Enumeration; import java.util.List; import java.util.Map; import java.util.HashSet; import java.util.HashMap; import java.util.Collections; import java.util.Random; import org.jboss.netty.buffer.ChannelBuffer; import java.util.concurrent.atomic.AtomicLong; import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; import org.apache.bookkeeper.test.MultiLedgerManagerMultiDigestTestCase; import org.apache.bookkeeper.conf.ClientConfiguration; import org.apache.bookkeeper.conf.ServerConfiguration; import org.apache.bookkeeper.meta.MSLedgerManagerFactory; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.GenericCallback; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.ReadEntryCallback; import org.apache.bookkeeper.client.AsyncCallback.RecoverCallback; import org.apache.bookkeeper.client.BookKeeper.DigestType; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.junit.After; import org.junit.Before; import org.junit.Test; /** * This class tests the bookie recovery admin functionality. */ public class BookieRecoveryTest extends MultiLedgerManagerMultiDigestTestCase { static Logger LOG = LoggerFactory.getLogger(BookieRecoveryTest.class); // Object used for synchronizing async method calls class SyncObject { boolean value; public SyncObject() { value = false; } } // Object used for implementing the Bookie RecoverCallback for this jUnit // test. This verifies that the operation completed successfully. 
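// The callback below records success, flips the SyncObject flag under its
// monitor and calls notify(); test methods block in a wait() loop on that
// flag instead of polling for completion.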
class BookieRecoverCallback implements RecoverCallback { boolean success = false; @Override public void recoverComplete(int rc, Object ctx) { LOG.info("Recovered bookie operation completed with rc: " + rc); success = rc == BKException.Code.OK; SyncObject sync = (SyncObject) ctx; synchronized (sync) { sync.value = true; sync.notify(); } } } // Objects to use for this jUnit test. DigestType digestType; String ledgerManagerFactory; SyncObject sync; BookieRecoverCallback bookieRecoverCb; BookKeeperAdmin bkAdmin; // Constructor public BookieRecoveryTest(String ledgerManagerFactory, DigestType digestType) { super(3); this.digestType = digestType; this.ledgerManagerFactory = ledgerManagerFactory; LOG.info("Using ledger manager " + ledgerManagerFactory); // set ledger manager baseConf.setLedgerManagerFactoryClassName(ledgerManagerFactory); baseClientConf.setLedgerManagerFactoryClassName(ledgerManagerFactory); } @Before @Override public void setUp() throws Exception { // Set up the configuration properties needed. baseClientConf.setBookieRecoveryDigestType(digestType); baseClientConf.setBookieRecoveryPasswd("".getBytes()); super.setUp(); sync = new SyncObject(); bookieRecoverCb = new BookieRecoverCallback(); ClientConfiguration adminConf = new ClientConfiguration(baseClientConf); adminConf.setZkServers(zkUtil.getZooKeeperConnectString()); bkAdmin = new BookKeeperAdmin(adminConf); } @After @Override public void tearDown() throws Exception { // Release any resources used by the BookKeeperTools instance. if(bkAdmin != null){ bkAdmin.close(); } super.tearDown(); } /** * Helper method to create a number of ledgers * * @param numLedgers * Number of ledgers to create * @return List of LedgerHandles for each of the ledgers created */ private List createLedgers(int numLedgers) throws BKException, IOException, InterruptedException { return createLedgers(numLedgers, 3, 2); } /** * Helper method to create a number of ledgers * * @param numLedgers * Number of ledgers to create * @param ensemble Ensemble size for ledgers * @param quorum Quorum size for ledgers * @return List of LedgerHandles for each of the ledgers created */ private List createLedgers(int numLedgers, int ensemble, int quorum) throws BKException, IOException, InterruptedException { List lhs = new ArrayList(); for (int i = 0; i < numLedgers; i++) { lhs.add(bkc.createLedger(ensemble, quorum, digestType, baseClientConf.getBookieRecoveryPasswd())); } return lhs; } private List openLedgers(List oldLhs) throws Exception { List newLhs = new ArrayList(); for (LedgerHandle oldLh : oldLhs) { newLhs.add(bkc.openLedger(oldLh.getId(), digestType, baseClientConf.getBookieRecoveryPasswd())); } return newLhs; } /** * Helper method to write dummy ledger entries to all of the ledgers passed. * * @param numEntries * Number of ledger entries to write for each ledger * @param startEntryId * The first entry Id we're expecting to write for each ledger * @param lhs * List of LedgerHandles for all ledgers to write entries to * @throws BKException * @throws InterruptedException */ private void writeEntriestoLedgers(int numEntries, long startEntryId, List lhs) throws BKException, InterruptedException { for (LedgerHandle lh : lhs) { for (int i = 0; i < numEntries; i++) { lh.addEntry(("LedgerId: " + lh.getId() + ", EntryId: " + (startEntryId + i)).getBytes()); } } } private void closeLedgers(List lhs) throws BKException, InterruptedException { for (LedgerHandle lh : lhs) { lh.close(); } } /** * Helper method to verify that we can read the recovered ledger entries. 
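* Each ledger is reopened and its entries are checked against the
* deterministic "LedgerId: x, EntryId: y" payloads produced by
* writeEntriestoLedgers.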
* * @param oldLhs * Old Ledger Handles * @param startEntryId * Start Entry Id to read * @param endEntryId * End Entry Id to read * @throws BKException * @throws InterruptedException */ private void verifyRecoveredLedgers(List oldLhs, long startEntryId, long endEntryId) throws BKException, InterruptedException { // Get a set of LedgerHandles for all of the ledgers to verify List lhs = new ArrayList(); for (int i = 0; i < oldLhs.size(); i++) { lhs.add(bkc.openLedger(oldLhs.get(i).getId(), digestType, baseClientConf.getBookieRecoveryPasswd())); } // Read the ledger entries to verify that they are all present and // correct in the new bookie. for (LedgerHandle lh : lhs) { Enumeration entries = lh.readEntries(startEntryId, endEntryId); while (entries.hasMoreElements()) { LedgerEntry entry = entries.nextElement(); assertTrue(new String(entry.getEntry()).equals("LedgerId: " + entry.getLedgerId() + ", EntryId: " + entry.getEntryId())); } } } /** * This tests the bookie recovery functionality with ensemble changes. * We'll verify that: * - bookie recovery should not affect ensemble change. * - ensemble change should not erase changes made by recovery. * * {@link https://issues.apache.org/jira/browse/BOOKKEEPER-667} */ @Test(timeout = 60000) public void testMetadataConflictWithRecovery() throws Exception { int numEntries = 10; byte[] data = "testMetadataConflictWithRecovery".getBytes(); LedgerHandle lh = bkc.createLedger(2, 2, digestType, baseClientConf.getBookieRecoveryPasswd()); for (int i = 0; i < numEntries; i++) { lh.addEntry(data); } InetSocketAddress bookieToKill = lh.getLedgerMetadata().getEnsemble(numEntries - 1).get(1); killBookie(bookieToKill); startNewBookie(); for (int i = 0; i < numEntries; i++) { lh.addEntry(data); } bkAdmin.recoverBookieData(bookieToKill, null); // fail another bookie to cause ensemble change again bookieToKill = lh.getLedgerMetadata().getEnsemble(2 * numEntries - 1).get(1); ServerConfiguration confOfKilledBookie = killBookie(bookieToKill); startNewBookie(); for (int i = 0; i < numEntries; i++) { lh.addEntry(data); } // start the killed bookie again bsConfs.add(confOfKilledBookie); bs.add(startBookie(confOfKilledBookie)); // all ensembles should be fully replicated since it is recovered assertTrue("Not fully replicated", verifyFullyReplicated(lh, 3 * numEntries)); lh.close(); } /** * This tests the asynchronous bookie recovery functionality by writing * entries into 3 bookies, killing one bookie, starting up a new one to * replace it, and then recovering the ledger entries from the killed bookie * onto the new one. We'll verify that the entries stored on the killed * bookie are properly copied over and restored onto the new one. * * @throws Exception */ @Test(timeout=60000) public void testAsyncBookieRecoveryToSpecificBookie() throws Exception { // Create the ledgers int numLedgers = 3; List lhs = createLedgers(numLedgers); // Write the entries for the ledgers with dummy values. int numMsgs = 10; writeEntriestoLedgers(numMsgs, 0, lhs); // Shutdown the first bookie server LOG.info("Finished writing all ledger entries so shutdown one of the bookies."); int initialPort = bsConfs.get(0).getBookiePort(); bs.get(0).shutdown(); bs.remove(0); // Startup a new bookie server int newBookiePort = startNewBookie(); // Write some more entries for the ledgers so a new ensemble will be // created for them. writeEntriestoLedgers(numMsgs, 10, lhs); // Call the async recover bookie method. 
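// The dead bookie is addressed by the host:port it had been registered
// under; the destination is the replacement started above. The later
// "random bookies" tests pass a null destination instead.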
InetSocketAddress bookieSrc = new InetSocketAddress(InetAddress.getLocalHost().getHostAddress(), initialPort); InetSocketAddress bookieDest = new InetSocketAddress(InetAddress.getLocalHost().getHostAddress(), newBookiePort); LOG.info("Now recover the data on the killed bookie (" + bookieSrc + ") and replicate it to the new one (" + bookieDest + ")"); // Initiate the sync object sync.value = false; bkAdmin.asyncRecoverBookieData(bookieSrc, bookieDest, bookieRecoverCb, sync); // Wait for the async method to complete. synchronized (sync) { while (sync.value == false) { sync.wait(); } assertTrue(bookieRecoverCb.success); } // Verify the recovered ledger entries are okay. verifyRecoveredLedgers(lhs, 0, 2 * numMsgs - 1); } /** * This tests the asynchronous bookie recovery functionality by writing * entries into 3 bookies, killing one bookie, starting up a few new * bookies, and then recovering the ledger entries from the killed bookie * onto random available bookie servers. We'll verify that the entries * stored on the killed bookie are properly copied over and restored onto * the other bookies. * * @throws Exception */ @Test(timeout=60000) public void testAsyncBookieRecoveryToRandomBookies() throws Exception { // Create the ledgers int numLedgers = 3; List lhs = createLedgers(numLedgers); // Write the entries for the ledgers with dummy values. int numMsgs = 10; writeEntriestoLedgers(numMsgs, 0, lhs); // Shutdown the first bookie server LOG.info("Finished writing all ledger entries so shutdown one of the bookies."); int initialPort = bsConfs.get(0).getBookiePort(); bs.get(0).shutdown(); bs.remove(0); // Startup three new bookie servers for (int i = 0; i < 3; i++) { startNewBookie(); } // Write some more entries for the ledgers so a new ensemble will be // created for them. writeEntriestoLedgers(numMsgs, 10, lhs); // Call the async recover bookie method. InetSocketAddress bookieSrc = new InetSocketAddress(InetAddress.getLocalHost().getHostAddress(), initialPort); InetSocketAddress bookieDest = null; LOG.info("Now recover the data on the killed bookie (" + bookieSrc + ") and replicate it to a random available one"); // Initiate the sync object sync.value = false; bkAdmin.asyncRecoverBookieData(bookieSrc, bookieDest, bookieRecoverCb, sync); // Wait for the async method to complete. synchronized (sync) { while (sync.value == false) { sync.wait(); } assertTrue(bookieRecoverCb.success); } // Verify the recovered ledger entries are okay. verifyRecoveredLedgers(lhs, 0, 2 * numMsgs - 1); } /** * This tests the synchronous bookie recovery functionality by writing * entries into 3 bookies, killing one bookie, starting up a new one to * replace it, and then recovering the ledger entries from the killed bookie * onto the new one. We'll verify that the entries stored on the killed * bookie are properly copied over and restored onto the new one. * * @throws Exception */ @Test(timeout=60000) public void testSyncBookieRecoveryToSpecificBookie() throws Exception { // Create the ledgers int numLedgers = 3; List lhs = createLedgers(numLedgers); // Write the entries for the ledgers with dummy values. int numMsgs = 10; writeEntriestoLedgers(numMsgs, 0, lhs); // Shutdown the first bookie server LOG.info("Finished writing all ledger entries so shutdown one of the bookies."); int initialPort = bsConfs.get(0).getBookiePort(); bs.get(0).shutdown(); bs.remove(0); // Startup a new bookie server int newBookiePort = startNewBookie(); // Write some more entries for the ledgers so a new ensemble will be // created for them. 
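// (The client only notices the dead bookie when a subsequent add fails,
// at which point it substitutes a fresh bookie into the ensemble for the
// new entries; the first numMsgs entries still reference the killed one,
// which is what recoverBookieData has to repair.)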
writeEntriestoLedgers(numMsgs, 10, lhs); // Call the sync recover bookie method. InetSocketAddress bookieSrc = new InetSocketAddress(InetAddress.getLocalHost().getHostAddress(), initialPort); InetSocketAddress bookieDest = new InetSocketAddress(InetAddress.getLocalHost().getHostAddress(), newBookiePort); LOG.info("Now recover the data on the killed bookie (" + bookieSrc + ") and replicate it to the new one (" + bookieDest + ")"); bkAdmin.recoverBookieData(bookieSrc, bookieDest); // Verify the recovered ledger entries are okay. verifyRecoveredLedgers(lhs, 0, 2 * numMsgs - 1); } /** * This tests the synchronous bookie recovery functionality by writing * entries into 3 bookies, killing one bookie, starting up a few new * bookies, and then recovering the ledger entries from the killed bookie * onto random available bookie servers. We'll verify that the entries * stored on the killed bookie are properly copied over and restored onto * the other bookies. * * @throws Exception */ @Test(timeout=60000) public void testSyncBookieRecoveryToRandomBookies() throws Exception { // Create the ledgers int numLedgers = 3; List lhs = createLedgers(numLedgers); // Write the entries for the ledgers with dummy values. int numMsgs = 10; writeEntriestoLedgers(numMsgs, 0, lhs); // Shutdown the first bookie server LOG.info("Finished writing all ledger entries so shutdown one of the bookies."); int initialPort = bsConfs.get(0).getBookiePort(); bs.get(0).shutdown(); bs.remove(0); // Startup three new bookie servers for (int i = 0; i < 3; i++) { startNewBookie(); } // Write some more entries for the ledgers so a new ensemble will be // created for them. writeEntriestoLedgers(numMsgs, 10, lhs); // Call the sync recover bookie method. InetSocketAddress bookieSrc = new InetSocketAddress(InetAddress.getLocalHost().getHostAddress(), initialPort); InetSocketAddress bookieDest = null; LOG.info("Now recover the data on the killed bookie (" + bookieSrc + ") and replicate it to a random available one"); bkAdmin.recoverBookieData(bookieSrc, bookieDest); // Verify the recovered ledger entries are okay. 
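// Entries 0..(2*numMsgs-1) span both the pre-failure and post-failure
// writes, so this checks that recovery restored the killed bookie's data
// without disturbing entries written after the ensemble change.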
verifyRecoveredLedgers(lhs, 0, 2 * numMsgs - 1); } private static class ReplicationVerificationCallback implements ReadEntryCallback { final CountDownLatch latch; final AtomicLong numSuccess; ReplicationVerificationCallback(int numRequests) { latch = new CountDownLatch(numRequests); numSuccess = new AtomicLong(0); } @Override public void readEntryComplete(int rc, long ledgerId, long entryId, ChannelBuffer buffer, Object ctx) { if (LOG.isDebugEnabled()) { InetSocketAddress addr = (InetSocketAddress)ctx; LOG.debug("Got " + rc + " for ledger " + ledgerId + " entry " + entryId + " from " + ctx); } if (rc == BKException.Code.OK) { numSuccess.incrementAndGet(); } latch.countDown(); } long await() throws InterruptedException { if (latch.await(60, TimeUnit.SECONDS) == false) { LOG.warn("Didn't get all responses in verification"); return 0; } else { return numSuccess.get(); } } } private boolean verifyFullyReplicated(LedgerHandle lh, long untilEntry) throws Exception { LedgerMetadata md = getLedgerMetadata(lh); Map> ensembles = md.getEnsembles(); HashMap ranges = new HashMap(); ArrayList keyList = Collections.list( Collections.enumeration(ensembles.keySet())); Collections.sort(keyList); for (int i = 0; i < keyList.size() - 1; i++) { ranges.put(keyList.get(i), keyList.get(i+1)); } ranges.put(keyList.get(keyList.size()-1), untilEntry); for (Map.Entry> e : ensembles.entrySet()) { int quorum = md.getAckQuorumSize(); long startEntryId = e.getKey(); long endEntryId = ranges.get(startEntryId); long expectedSuccess = quorum*(endEntryId-startEntryId); int numRequests = e.getValue().size()*((int)(endEntryId-startEntryId)); ReplicationVerificationCallback cb = new ReplicationVerificationCallback(numRequests); for (long i = startEntryId; i < endEntryId; i++) { for (InetSocketAddress addr : e.getValue()) { bkc.bookieClient.readEntry(addr, lh.getId(), i, cb, addr); } } long numSuccess = cb.await(); if (numSuccess < expectedSuccess) { LOG.warn("Fragment not fully replicated ledgerId = " + lh.getId() + " startEntryId = " + startEntryId + " endEntryId = " + endEntryId + " expectedSuccess = " + expectedSuccess + " gotSuccess = " + numSuccess); return false; } } return true; } // Object used for synchronizing async method calls class SyncLedgerMetaObject { boolean value; int rc; LedgerMetadata meta; public SyncLedgerMetaObject() { value = false; meta = null; } } private LedgerMetadata getLedgerMetadata(LedgerHandle lh) throws Exception { final SyncLedgerMetaObject syncObj = new SyncLedgerMetaObject(); bkc.getLedgerManager().readLedgerMetadata(lh.getId(), new GenericCallback() { @Override public void operationComplete(int rc, LedgerMetadata result) { synchronized (syncObj) { syncObj.rc = rc; syncObj.meta = result; syncObj.value = true; syncObj.notify(); } } }); synchronized (syncObj) { while (syncObj.value == false) { syncObj.wait(); } } assertEquals(BKException.Code.OK, syncObj.rc); return syncObj.meta; } private boolean findDupesInEnsembles(List lhs) throws Exception { long numDupes = 0; for (LedgerHandle lh : lhs) { LedgerMetadata md = getLedgerMetadata(lh); for (Map.Entry> e : md.getEnsembles().entrySet()) { HashSet set = new HashSet(); long fragment = e.getKey(); for (InetSocketAddress addr : e.getValue()) { if (set.contains(addr)) { LOG.error("Dupe " + addr + " found in ensemble for fragment " + fragment + " of ledger " + lh.getId()); numDupes++; } set.add(addr); } } } return numDupes > 0; } /** * Test recoverying the closed ledgers when the failed bookie server is in the last ensemble */ @Test(timeout=60000) 
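// Closed ledgers have a fixed last entry id, so recovery must replicate
// exactly entries 0..numMsgs-1 from the killed member of the last ensemble.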
public void testBookieRecoveryOnClosedLedgers() throws Exception { // Create the ledgers int numLedgers = 3; List lhs = createLedgers(numLedgers, numBookies, 2); // Write the entries for the ledgers with dummy values int numMsgs = 10; writeEntriestoLedgers(numMsgs, 0, lhs); closeLedgers(lhs); // Shutdown last bookie server in last ensemble ArrayList lastEnsemble = lhs.get(0).getLedgerMetadata().getEnsembles() .entrySet().iterator().next().getValue(); InetSocketAddress bookieToKill = lastEnsemble.get(lastEnsemble.size() - 1); killBookie(bookieToKill); // start a new bookie startNewBookie(); InetSocketAddress bookieDest = null; LOG.info("Now recover the data on the killed bookie (" + bookieToKill + ") and replicate it to a random available one"); bkAdmin.recoverBookieData(bookieToKill, bookieDest); for (LedgerHandle lh : lhs) { assertTrue("Not fully replicated", verifyFullyReplicated(lh, numMsgs)); lh.close(); } } @Test(timeout=60000) public void testBookieRecoveryOnOpenedLedgers() throws Exception { // Create the ledgers int numLedgers = 3; List lhs = createLedgers(numLedgers, numBookies, 2); // Write the entries for the ledgers with dummy values int numMsgs = 10; writeEntriestoLedgers(numMsgs, 0, lhs); // Shutdown the first bookie server ArrayList lastEnsemble = lhs.get(0).getLedgerMetadata().getEnsembles() .entrySet().iterator().next().getValue(); InetSocketAddress bookieToKill = lastEnsemble.get(lastEnsemble.size() - 1); killBookie(bookieToKill); // start a new bookie startNewBookie(); InetSocketAddress bookieDest = null; LOG.info("Now recover the data on the killed bookie (" + bookieToKill + ") and replicate it to a random available one"); bkAdmin.recoverBookieData(bookieToKill, bookieDest); for (LedgerHandle lh : lhs) { assertTrue("Not fully replicated", verifyFullyReplicated(lh, numMsgs)); } try { // we can't write entries writeEntriestoLedgers(numMsgs, 0, lhs); fail("should not reach here"); } catch (Exception e) { } } @Test(timeout=60000) public void testBookieRecoveryOnInRecoveryLedger() throws Exception { int numMsgs = 10; // Create the ledgers int numLedgers = 1; List lhs = createLedgers(numLedgers, 2, 2); // Write the entries for the ledgers with dummy values writeEntriestoLedgers(numMsgs, 0, lhs); // Shutdown the first bookie server ArrayList lastEnsemble = lhs.get(0).getLedgerMetadata().getEnsembles() .entrySet().iterator().next().getValue(); // removed bookie InetSocketAddress bookieToKill = lastEnsemble.get(0); killBookie(bookieToKill); // temp failure InetSocketAddress bookieToKill2 = lastEnsemble.get(1); ServerConfiguration conf2 = killBookie(bookieToKill2); // start a new bookie startNewBookie(); // open these ledgers for (LedgerHandle oldLh : lhs) { try { bkc.openLedger(oldLh.getId(), digestType, baseClientConf.getBookieRecoveryPasswd()); fail("Should have thrown exception"); } catch (Exception e) { } } try { bkAdmin.recoverBookieData(bookieToKill, null); fail("Should have thrown exception"); } catch (BKException.BKLedgerRecoveryException bke) { // correct behaviour } // restart failed bookie bs.add(startBookie(conf2)); bsConfs.add(conf2); // recover them bkAdmin.recoverBookieData(bookieToKill, null); for (LedgerHandle lh : lhs) { assertTrue("Not fully replicated", verifyFullyReplicated(lh, numMsgs)); } // open ledgers to read metadata List newLhs = openLedgers(lhs); for (LedgerHandle newLh : newLhs) { // first ensemble should contains bookieToKill2 and not contain bookieToKill Map.Entry> entry = newLh.getLedgerMetadata().getEnsembles().entrySet().iterator().next(); 
assertFalse(entry.getValue().contains(bookieToKill)); assertTrue(entry.getValue().contains(bookieToKill2)); } } @Test(timeout=60000) public void testAsyncBookieRecoveryToRandomBookiesNotEnoughBookies() throws Exception { // Create the ledgers int numLedgers = 3; List lhs = createLedgers(numLedgers, numBookies, 2); // Write the entries for the ledgers with dummy values. int numMsgs = 10; writeEntriestoLedgers(numMsgs, 0, lhs); // Shutdown the first bookie server LOG.info("Finished writing all ledger entries so shutdown one of the bookies."); int initialPort = bsConfs.get(0).getBookiePort(); bs.get(0).shutdown(); bs.remove(0); // Call the async recover bookie method. InetSocketAddress bookieSrc = new InetSocketAddress(InetAddress.getLocalHost().getHostAddress(), initialPort); InetSocketAddress bookieDest = null; LOG.info("Now recover the data on the killed bookie (" + bookieSrc + ") and replicate it to a random available one"); // Initiate the sync object sync.value = false; try { bkAdmin.recoverBookieData(bookieSrc, null); fail("Should have thrown exception"); } catch (BKException.BKLedgerRecoveryException bke) { // correct behaviour } } @Test(timeout=60000) public void testSyncBookieRecoveryToRandomBookiesCheckForDupes() throws Exception { Random r = new Random(); // Create the ledgers int numLedgers = 3; List lhs = createLedgers(numLedgers, numBookies, 2); // Write the entries for the ledgers with dummy values. int numMsgs = 10; writeEntriestoLedgers(numMsgs, 0, lhs); // Shutdown the first bookie server LOG.info("Finished writing all ledger entries so shutdown one of the bookies."); int removeIndex = r.nextInt(bs.size()); InetSocketAddress bookieSrc = bs.get(removeIndex).getLocalAddress(); bs.get(removeIndex).shutdown(); bs.remove(removeIndex); // Startup new bookie server startNewBookie(); // Write some more entries for the ledgers so a new ensemble will be // created for them. writeEntriestoLedgers(numMsgs, numMsgs, lhs); // Call the async recover bookie method. LOG.info("Now recover the data on the killed bookie (" + bookieSrc + ") and replicate it to a random available one"); // Initiate the sync object sync.value = false; bkAdmin.recoverBookieData(bookieSrc, null); assertFalse("Dupes exist in ensembles", findDupesInEnsembles(lhs)); // Write some more entries to ensure fencing hasn't broken stuff writeEntriestoLedgers(numMsgs, numMsgs*2, lhs); for (LedgerHandle lh : lhs) { assertTrue("Not fully replicated", verifyFullyReplicated(lh, numMsgs*3)); lh.close(); } } @Test(timeout=60000) public void recoverWithoutPasswordInConf() throws Exception { byte[] passwdCorrect = "AAAAAA".getBytes(); byte[] passwdBad = "BBBBBB".getBytes(); DigestType digestCorrect = digestType; DigestType digestBad = (digestType == DigestType.MAC) ? 
DigestType.CRC32 : DigestType.MAC; LedgerHandle lh = bkc.createLedger(3, 2, digestCorrect, passwdCorrect); long ledgerId = lh.getId(); for (int i = 0; i < 100; i++) { lh.addEntry("foobar".getBytes()); } lh.close(); InetSocketAddress bookieSrc = bs.get(0).getLocalAddress(); bs.get(0).shutdown(); bs.remove(0); startNewBookie(); // Check that entries are missing lh = bkc.openLedgerNoRecovery(ledgerId, digestCorrect, passwdCorrect); assertFalse("Should be entries missing", verifyFullyReplicated(lh, 100)); lh.close(); // Try to recover with bad password in conf // This is fine, because it only falls back to the configured // password if the password info is missing from the metadata ClientConfiguration adminConf = new ClientConfiguration(); adminConf.setZkServers(zkUtil.getZooKeeperConnectString()); adminConf.setBookieRecoveryDigestType(digestCorrect); adminConf.setBookieRecoveryPasswd(passwdBad); setMetastoreImplClass(adminConf); BookKeeperAdmin bka = new BookKeeperAdmin(adminConf); bka.recoverBookieData(bookieSrc, null); bka.close(); lh = bkc.openLedgerNoRecovery(ledgerId, digestCorrect, passwdCorrect); assertTrue("Should be back to fully replication", verifyFullyReplicated(lh, 100)); lh.close(); bookieSrc = bs.get(0).getLocalAddress(); bs.get(0).shutdown(); bs.remove(0); startNewBookie(); // Check that entries are missing lh = bkc.openLedgerNoRecovery(ledgerId, digestCorrect, passwdCorrect); assertFalse("Should be entries missing", verifyFullyReplicated(lh, 100)); lh.close(); // Try to recover with no password in conf adminConf = new ClientConfiguration(); adminConf.setZkServers(zkUtil.getZooKeeperConnectString()); setMetastoreImplClass(adminConf); bka = new BookKeeperAdmin(adminConf); bka.recoverBookieData(bookieSrc, null); bka.close(); lh = bkc.openLedgerNoRecovery(ledgerId, digestCorrect, passwdCorrect); assertTrue("Should be back to fully replication", verifyFullyReplicated(lh, 100)); lh.close(); } /** * Test that when we try to recover a ledger which doesn't have * the password stored in the configuration, we don't succeed */ @Test(timeout=60000) public void ensurePasswordUsedForOldLedgers() throws Exception { // This test bases on creating old ledgers in version 4.1.0, which only // supports ZooKeeper based flat and hierarchical LedgerManagerFactory. // So we ignore it for MSLedgerManagerFactory. if (MSLedgerManagerFactory.class.getName().equals(ledgerManagerFactory)) { return; } // stop all bookies // and wipe the ledger layout so we can use an old client zkUtil.getZooKeeperClient().delete("/ledgers/LAYOUT", -1); byte[] passwdCorrect = "AAAAAA".getBytes(); byte[] passwdBad = "BBBBBB".getBytes(); DigestType digestCorrect = digestType; DigestType digestBad = digestCorrect == DigestType.MAC ? DigestType.CRC32 : DigestType.MAC; org.apache.bk_v4_1_0.bookkeeper.client.BookKeeper.DigestType digestCorrect410 = org.apache.bk_v4_1_0.bookkeeper.client.BookKeeper.DigestType.valueOf(digestType.toString()); org.apache.bk_v4_1_0.bookkeeper.conf.ClientConfiguration c = new org.apache.bk_v4_1_0.bookkeeper.conf.ClientConfiguration(); c.setZkServers(zkUtil.getZooKeeperConnectString()) .setLedgerManagerType( ledgerManagerFactory.equals("org.apache.bookkeeper.meta.FlatLedgerManagerFactory") ? "flat" : "hierarchical"); // create client to set up layout, close it, restart bookies, and open a new client. 
// the new client is necessary to ensure that it has all the restarted bookies in the // its available bookie list org.apache.bk_v4_1_0.bookkeeper.client.BookKeeper bkc41 = new org.apache.bk_v4_1_0.bookkeeper.client.BookKeeper(c); bkc41.close(); restartBookies(); bkc41 = new org.apache.bk_v4_1_0.bookkeeper.client.BookKeeper(c); org.apache.bk_v4_1_0.bookkeeper.client.LedgerHandle lh41 = bkc41.createLedger(3, 2, digestCorrect410, passwdCorrect); long ledgerId = lh41.getId(); for (int i = 0; i < 100; i++) { lh41.addEntry("foobar".getBytes()); } lh41.close(); bkc41.close(); // Startup a new bookie server int newBookiePort = startNewBookie(); int removeIndex = 0; InetSocketAddress bookieSrc = bs.get(removeIndex).getLocalAddress(); bs.get(removeIndex).shutdown(); bs.remove(removeIndex); // Check that entries are missing LedgerHandle lh = bkc.openLedgerNoRecovery(ledgerId, digestCorrect, passwdCorrect); assertFalse("Should be entries missing", verifyFullyReplicated(lh, 100)); lh.close(); // Try to recover with bad password in conf // if the digest type is MAC // for CRC32, the password is only checked // when adding new entries, which recovery will // never do ClientConfiguration adminConf; BookKeeperAdmin bka; if (digestCorrect == DigestType.MAC) { adminConf = new ClientConfiguration(); adminConf.setZkServers(zkUtil.getZooKeeperConnectString()); adminConf.setLedgerManagerFactoryClassName(ledgerManagerFactory); adminConf.setBookieRecoveryDigestType(digestCorrect); adminConf.setBookieRecoveryPasswd(passwdBad); bka = new BookKeeperAdmin(adminConf); try { bka.recoverBookieData(bookieSrc, null); fail("Shouldn't be able to recover with wrong password"); } catch (BKException bke) { // correct behaviour } finally { bka.close(); } } // Try to recover with bad digest in conf adminConf = new ClientConfiguration(); adminConf.setZkServers(zkUtil.getZooKeeperConnectString()); adminConf.setLedgerManagerFactoryClassName(ledgerManagerFactory); adminConf.setBookieRecoveryDigestType(digestBad); adminConf.setBookieRecoveryPasswd(passwdCorrect); bka = new BookKeeperAdmin(adminConf); try { bka.recoverBookieData(bookieSrc, null); fail("Shouldn't be able to recover with wrong digest"); } catch (BKException bke) { // correct behaviour } finally { bka.close(); } // Check that entries are still missing lh = bkc.openLedgerNoRecovery(ledgerId, digestCorrect, passwdCorrect); assertFalse("Should be entries missing", verifyFullyReplicated(lh, 100)); lh.close(); adminConf.setBookieRecoveryDigestType(digestCorrect); adminConf.setBookieRecoveryPasswd(passwdCorrect); bka = new BookKeeperAdmin(adminConf); bka.recoverBookieData(bookieSrc, null); bka.close(); lh = bkc.openLedgerNoRecovery(ledgerId, digestCorrect, passwdCorrect); assertTrue("Should have recovered everything", verifyFullyReplicated(lh, 100)); lh.close(); } } BookieWriteLedgerTest.java000066400000000000000000000176341244507361200360330ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/client/** * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. 
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.client; import java.net.InetSocketAddress; import java.nio.ByteBuffer; import java.util.ArrayList; import java.util.Enumeration; import java.util.Random; import org.apache.bookkeeper.client.AsyncCallback.AddCallback; import org.apache.bookkeeper.client.BookKeeper.DigestType; import org.apache.bookkeeper.test.MultiLedgerManagerMultiDigestTestCase; import org.junit.Before; import org.junit.Test; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Testing ledger write entry cases */ public class BookieWriteLedgerTest extends MultiLedgerManagerMultiDigestTestCase implements AddCallback { private static Logger LOG = LoggerFactory .getLogger(BookieWriteLedgerTest.class); byte[] ledgerPassword = "aaa".getBytes(); LedgerHandle lh, lh2; Enumeration ls; // test related variables int numEntriesToWrite = 100; int maxInt = Integer.MAX_VALUE; Random rng; // Random Number Generator ArrayList entries1; // generated entries ArrayList entries2; // generated entries DigestType digestType; private static class SyncObj { volatile int counter; volatile int rc; public SyncObj() { counter = 0; } } @Override @Before public void setUp() throws Exception { super.setUp(); rng = new Random(System.currentTimeMillis()); // Initialize the Random // Number Generator entries1 = new ArrayList(); // initialize the entries list entries2 = new ArrayList(); // initialize the entries list } public BookieWriteLedgerTest(String ledgerManagerFactory, DigestType digestType) { super(5); this.digestType = digestType; // set ledger manager baseConf.setLedgerManagerFactoryClassName(ledgerManagerFactory); baseClientConf.setLedgerManagerFactoryClassName(ledgerManagerFactory); } /** * Verify write when few bookie failures in last ensemble and forcing * ensemble reformation */ @Test(timeout=60000) public void testWithMultipleBookieFailuresInLastEnsemble() throws Exception { // Create a ledger lh = bkc.createLedger(5, 4, digestType, ledgerPassword); LOG.info("Ledger ID: " + lh.getId()); for (int i = 0; i < numEntriesToWrite; i++) { ByteBuffer entry = ByteBuffer.allocate(4); entry.putInt(rng.nextInt(maxInt)); entry.position(0); entries1.add(entry.array()); lh.addEntry(entry.array()); } // Start three more bookies startNewBookie(); startNewBookie(); startNewBookie(); // Shutdown three bookies in the last ensemble and continue writing ArrayList ensemble = lh.getLedgerMetadata() .getEnsembles().entrySet().iterator().next().getValue(); killBookie(ensemble.get(0)); killBookie(ensemble.get(1)); killBookie(ensemble.get(2)); int i = numEntriesToWrite; numEntriesToWrite = numEntriesToWrite + 50; for (; i < numEntriesToWrite; i++) { ByteBuffer entry = ByteBuffer.allocate(4); entry.putInt(rng.nextInt(maxInt)); entry.position(0); entries1.add(entry.array()); lh.addEntry(entry.array()); } readEntries(lh, entries1); lh.close(); } /** * Verify asynchronous writing when few bookie failures in last ensemble. 
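* (A sketch of the setup below: each ledger uses ensemble size 5 with write quorum 4, so after three bookies from the last ensemble are killed and three fresh ones started, the pending async adds can only complete once an ensemble change succeeds.)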
*/ @Test(timeout=60000) public void testAsyncWritesWithMultipleFailuresInLastEnsemble() throws Exception { // Create ledgers lh = bkc.createLedger(5, 4, digestType, ledgerPassword); lh2 = bkc.createLedger(5, 4, digestType, ledgerPassword); LOG.info("Ledger ID-1: " + lh.getId()); LOG.info("Ledger ID-2: " + lh2.getId()); for (int i = 0; i < numEntriesToWrite; i++) { ByteBuffer entry = ByteBuffer.allocate(4); entry.putInt(rng.nextInt(maxInt)); entry.position(0); entries1.add(entry.array()); entries2.add(entry.array()); lh.addEntry(entry.array()); lh2.addEntry(entry.array()); } // Start three more bookies startNewBookie(); startNewBookie(); startNewBookie(); // Shutdown three bookies in the last ensemble and continue writing ArrayList<InetSocketAddress> ensemble = lh.getLedgerMetadata() .getEnsembles().entrySet().iterator().next().getValue(); killBookie(ensemble.get(0)); killBookie(ensemble.get(1)); killBookie(ensemble.get(2)); // Add one more entry to both ledgers asynchronously after the multiple // bookie failures; this modifies the metadata of both ledgers concurrently. numEntriesToWrite++; ByteBuffer entry = ByteBuffer.allocate(4); entry.putInt(rng.nextInt(maxInt)); entry.position(0); entries1.add(entry.array()); entries2.add(entry.array()); SyncObj syncObj1 = new SyncObj(); SyncObj syncObj2 = new SyncObj(); lh.asyncAddEntry(entry.array(), this, syncObj1); lh2.asyncAddEntry(entry.array(), this, syncObj2); // wait for all entries to be acknowledged for the first ledger synchronized (syncObj1) { while (syncObj1.counter < 1) { LOG.debug("Entries counter = " + syncObj1.counter); syncObj1.wait(); } assertEquals(BKException.Code.OK, syncObj1.rc); } // wait for all entries to be acknowledged for the second ledger synchronized (syncObj2) { while (syncObj2.counter < 1) { LOG.debug("Entries counter = " + syncObj2.counter); syncObj2.wait(); } assertEquals(BKException.Code.OK, syncObj2.rc); } // read each ledger through to the last entry readEntries(lh, entries1); readEntries(lh2, entries2); lh.close(); lh2.close(); } private void readEntries(LedgerHandle lh, ArrayList<byte[]> entries) throws InterruptedException, BKException { ls = lh.readEntries(0, numEntriesToWrite - 1); int index = 0; while (ls.hasMoreElements()) { ByteBuffer origbb = ByteBuffer.wrap(entries.get(index++)); Integer origEntry = origbb.getInt(); ByteBuffer result = ByteBuffer.wrap(ls.nextElement().getEntry()); LOG.debug("Length of result: " + result.capacity()); LOG.debug("Original entry: " + origEntry); Integer retrEntry = result.getInt(); LOG.debug("Retrieved entry: " + retrEntry); assertTrue("Checking entry " + index + " for equality", origEntry.equals(retrEntry)); } } @Override public void addComplete(int rc, LedgerHandle lh, long entryId, Object ctx) { SyncObj x = (SyncObj) ctx; synchronized (x) { x.rc = rc; x.counter++; x.notify(); } } } ClientUtil.java000066400000000000000000000027301244507361200336700ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/clientpackage org.apache.bookkeeper.client; /** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License.
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import org.jboss.netty.buffer.ChannelBuffer; public class ClientUtil { public static ChannelBuffer generatePacket(long ledgerId, long entryId, long lastAddConfirmed, long length, byte[] data) { CRC32DigestManager dm = new CRC32DigestManager(ledgerId); return dm.computeDigestAndPackageForSending(entryId, lastAddConfirmed, length, data, 0, data.length); } /** Returns whether the ledger is in an open state */ public static boolean isLedgerOpen(LedgerHandle handle) { return !handle.metadata.isClosed(); } }LedgerCloseTest.java000066400000000000000000000260751244507361200346520ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/client/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.bookkeeper.client; import java.io.IOException; import java.net.InetSocketAddress; import java.nio.ByteBuffer; import java.util.ArrayList; import java.util.List; import java.util.Set; import java.util.concurrent.CountDownLatch; import java.util.concurrent.ExecutionException; import java.util.concurrent.TimeUnit; import java.util.concurrent.TimeoutException; import java.util.concurrent.atomic.AtomicInteger; import org.apache.bookkeeper.bookie.Bookie; import org.apache.bookkeeper.bookie.BookieException; import org.apache.bookkeeper.client.AsyncCallback.AddCallback; import org.apache.bookkeeper.client.BookKeeper.DigestType; import org.apache.bookkeeper.conf.ClientConfiguration; import org.apache.bookkeeper.conf.ServerConfiguration; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.GenericCallback; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.WriteCallback; import org.apache.bookkeeper.test.BookKeeperClusterTestCase; import org.apache.bookkeeper.test.TestCallbacks.AddCallbackFuture; import org.junit.Test; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import static org.junit.Assert.*; import static com.google.common.base.Charsets.UTF_8; /** * This class tests the ledger close logic. */ @SuppressWarnings("deprecation") public class LedgerCloseTest extends BookKeeperClusterTestCase { static Logger LOG = LoggerFactory.getLogger(LedgerCloseTest.class); static final int READ_TIMEOUT = 1; final DigestType digestType; public LedgerCloseTest() { super(6); this.digestType = DigestType.CRC32; // set timeout to a large value which disables it.
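// (The values below are not magic, just effectively infinite for a 60-second test, so the latches in each test case, rather than read timeouts or the garbage collector, decide when operations complete.)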
baseClientConf.setReadTimeout(99999); baseConf.setGcWaitTime(999999); } @Test(timeout = 60000) public void testLedgerCloseWithConsistentLength() throws Exception { ClientConfiguration conf = new ClientConfiguration(); conf.setZkServers(zkUtil.getZooKeeperConnectString()).setReadTimeout(1); BookKeeper bkc = new BookKeeper(conf); LedgerHandle lh = bkc.createLedger(6, 3, DigestType.CRC32, new byte[] {}); final CountDownLatch latch = new CountDownLatch(1); stopBKCluster(); final AtomicInteger i = new AtomicInteger(0xdeadbeef); AsyncCallback.AddCallback cb = new AsyncCallback.AddCallback() { @Override public void addComplete(int rc, LedgerHandle lh, long entryId, Object ctx) { i.set(rc); latch.countDown(); } }; lh.asyncAddEntry("Test Entry".getBytes(), cb, null); latch.await(); assertEquals(BKException.Code.NotEnoughBookiesException, i.get()); assertEquals(0, lh.getLength()); assertEquals(LedgerHandle.INVALID_ENTRY_ID, lh.getLastAddConfirmed()); LedgerHandle newLh = bkc.openLedger(lh.getId(), DigestType.CRC32, new byte[] {}); assertEquals(0, newLh.getLength()); assertEquals(LedgerHandle.INVALID_ENTRY_ID, newLh.getLastAddConfirmed()); } @Test(timeout = 60000) public void testLedgerCloseDuringUnrecoverableErrors() throws Exception { int numEntries = 3; LedgerHandle lh = bkc.createLedger(3, 3, 3, digestType, "".getBytes()); verifyMetadataConsistency(numEntries, lh); } @Test(timeout = 60000) public void testLedgerCheckerShouldNotSelectInvalidLastFragments() throws Exception { int numEntries = 10; LedgerHandle lh = bkc.createLedger(3, 3, 3, digestType, "".getBytes()); // Add some entries before bookie failures for (int i = 0; i < numEntries; i++) { lh.addEntry("data".getBytes()); } numEntries = 4; // add n*ensembleSize+1 entries asynchronously after the bookies // have failed.
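// (Roughly: with ensemble size 3, writing 4 = 3*1+1 more entries gives at least one full pass over the reformed ensemble, so any inconsistency in the rewritten metadata should surface in the check below.)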
verifyMetadataConsistency(numEntries, lh); LedgerChecker checker = new LedgerChecker(bkc); CheckerCallback cb = new CheckerCallback(); checker.checkLedger(lh, cb); Set<LedgerFragment> result = cb.waitAndGetResult(); assertEquals("No fragments should be selected", 0, result.size()); } class CheckerCallback implements GenericCallback<Set<LedgerFragment>> { private Set<LedgerFragment> result = null; private CountDownLatch latch = new CountDownLatch(1); public void operationComplete(int rc, Set<LedgerFragment> result) { this.result = result; latch.countDown(); } Set<LedgerFragment> waitAndGetResult() throws InterruptedException { latch.await(); return result; } } private void verifyMetadataConsistency(int numEntries, LedgerHandle lh) throws Exception { final CountDownLatch addDoneLatch = new CountDownLatch(1); final CountDownLatch deadIOLatch = new CountDownLatch(1); final CountDownLatch recoverDoneLatch = new CountDownLatch(1); final CountDownLatch failedLatch = new CountDownLatch(1); // kill the first bookie to replace it with an unauthorized bookie InetSocketAddress bookie = lh.getLedgerMetadata().currentEnsemble.get(0); ServerConfiguration conf = killBookie(bookie); // replace with an unauthorized bookie startUnauthorizedBookie(conf, addDoneLatch); // kill the second bookie to replace it with a dead bookie bookie = lh.getLedgerMetadata().currentEnsemble.get(1); conf = killBookie(bookie); // replace with a slow, dead bookie startDeadBookie(conf, deadIOLatch); // try to add entries for (int i = 0; i < numEntries; i++) { lh.asyncAddEntry("data".getBytes(), new AddCallback() { @Override public void addComplete(int rc, LedgerHandle lh, long entryId, Object ctx) { if (BKException.Code.OK != rc) { failedLatch.countDown(); deadIOLatch.countDown(); } if (0 == entryId) { try { recoverDoneLatch.await(); } catch (InterruptedException ie) { } } } }, null); } // add finished addDoneLatch.countDown(); // wait until entries fail due to UnauthorizedAccessException failedLatch.await(); // simulate the ownership of this ledger being transferred to another host // (which is actually what we did in Hedwig). LOG.info("Recover ledger {}.", lh.getId()); ClientConfiguration newConf = new ClientConfiguration(); newConf.addConfiguration(baseClientConf); BookKeeper newBkc = new BookKeeperTestClient(newConf.setReadTimeout(1)); LedgerHandle recoveredLh = newBkc.openLedger(lh.getId(), digestType, "".getBytes()); LOG.info("Recover ledger {} done.", lh.getId()); recoverDoneLatch.countDown(); // wait a bit until add operations fail on the second bookie due to IOException TimeUnit.SECONDS.sleep(5); // open the ledger again to make sure we get the right last add confirmed.
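// (A recovery open reads the last add confirmed from the surviving bookies and writes the closed metadata back; if that update were lost, the handle opened below would disagree with recoveredLh in the assertion that follows.)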
LedgerHandle newLh = newBkc.openLedger(lh.getId(), digestType, "".getBytes()); assertEquals("Metadata should be consistent across different opened ledgers", recoveredLh.getLastAddConfirmed(), newLh.getLastAddConfirmed()); } private void startUnauthorizedBookie(ServerConfiguration conf, final CountDownLatch latch) throws Exception { Bookie sBookie = new Bookie(conf) { @Override public void addEntry(ByteBuffer entry, WriteCallback cb, Object ctx, byte[] masterKey) throws IOException, BookieException { try { latch.await(); } catch (InterruptedException e) { } throw BookieException.create(BookieException.Code.UnauthorizedAccessException); } @Override public void recoveryAddEntry(ByteBuffer entry, WriteCallback cb, Object ctx, byte[] masterKey) throws IOException, BookieException { throw new IOException("Dead bookie for recovery adds."); } }; bsConfs.add(conf); bs.add(startBookie(conf, sBookie)); } // simulate slow adds, then become normal when recover, // so no ensemble change when recovering ledger on this bookie. private void startDeadBookie(ServerConfiguration conf, final CountDownLatch latch) throws Exception { Bookie dBookie = new Bookie(conf) { @Override public void addEntry(ByteBuffer entry, WriteCallback cb, Object ctx, byte[] masterKey) throws IOException, BookieException { try { latch.await(); } catch (InterruptedException e) { } // simulate slow adds. throw new IOException("Dead bookie"); } }; bsConfs.add(conf); bs.add(startBookie(conf, dBookie)); } @Test(timeout = 60000) public void testAllWritesAreCompletedOnClosedLedger() throws Exception { for (int i = 0; i < 100; i++) { LOG.info("Iteration {}", i); List futures = new ArrayList(); LedgerHandle w = bkc.createLedger(DigestType.CRC32, new byte[0]); AddCallbackFuture f = new AddCallbackFuture(0L); w.asyncAddEntry("foobar".getBytes(UTF_8), f, null); f.get(); LedgerHandle r = bkc.openLedger(w.getId(), DigestType.CRC32, new byte[0]); for (int j = 0; j < 100; j++) { AddCallbackFuture f1 = new AddCallbackFuture(1L + j); w.asyncAddEntry("foobar".getBytes(), f1, null); futures.add(f1); } for (AddCallbackFuture f2: futures) { try { f2.get(10, TimeUnit.SECONDS); } catch (ExecutionException ee) { // we don't care about errors } catch (TimeoutException te) { LOG.error("Error on waiting completing entry {} : ", f2.getExpectedEntryId(), te); fail("Should succeed on waiting completing entry " + f2.getExpectedEntryId()); } } } } } LedgerHandleAdapter.java000066400000000000000000000021771244507361200354400ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/client/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
* */ package org.apache.bookkeeper.client; /** * Adapter for tests to get public access to members of LedgerHandle that * have default scope */ public class LedgerHandleAdapter { /** get the ledger metadata */ public static LedgerMetadata getLedgerMetadata(LedgerHandle lh) { return lh.getLedgerMetadata(); } } LedgerRecoveryTest.java000066400000000000000000000377261244507361200354020ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/clientpackage org.apache.bookkeeper.client; /* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ import java.io.IOException; import java.net.InetSocketAddress; import java.nio.ByteBuffer; import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicInteger; import org.apache.bookkeeper.bookie.Bookie; import org.apache.bookkeeper.bookie.BookieException; import org.apache.bookkeeper.client.AsyncCallback.AddCallback; import org.apache.bookkeeper.client.BookKeeper.DigestType; import org.apache.bookkeeper.conf.ServerConfiguration; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.WriteCallback; import org.apache.bookkeeper.test.BaseTestCase; import org.junit.Test; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * This unit test tests ledger recovery. * */ public class LedgerRecoveryTest extends BaseTestCase { static Logger LOG = LoggerFactory.getLogger(LedgerRecoveryTest.class); DigestType digestType; public LedgerRecoveryTest(DigestType digestType) { super(3); this.digestType = digestType; } private void testInternal(int numEntries) throws Exception { /* * Create ledger. */ LedgerHandle beforelh = null; beforelh = bkc.createLedger(digestType, "".getBytes()); String tmp = "BookKeeper is cool!"; for (int i = 0; i < numEntries; i++) { beforelh.addEntry(tmp.getBytes()); } long length = (long) (numEntries * tmp.length()); /* * Try to open ledger. */ LedgerHandle afterlh = bkc.openLedger(beforelh.getId(), digestType, "".getBytes()); /* * Check if it has recovered properly.
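* For example, testInternal(100) with the 19-byte payload above should recover lastAddConfirmed = 99 and a length of 100 * 19 = 1900 bytes.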
*/ assertTrue("Has not recovered correctly: " + afterlh.getLastAddConfirmed(), afterlh.getLastAddConfirmed() == numEntries - 1); assertTrue("Has not set the length correctly: " + afterlh.getLength() + ", " + length, afterlh.getLength() == length); } @Test(timeout=60000) public void testLedgerRecovery() throws Exception { testInternal(100); } @Test(timeout=60000) public void testEmptyLedgerRecoveryOne() throws Exception { testInternal(1); } @Test(timeout=60000) public void testEmptyLedgerRecovery() throws Exception { testInternal(0); } @Test(timeout=60000) public void testLedgerRecoveryWithWrongPassword() throws Exception { // Create a ledger byte[] ledgerPassword = "aaaa".getBytes(); LedgerHandle lh = bkc.createLedger(digestType, ledgerPassword); // bkc.initMessageDigest("SHA1"); long ledgerId = lh.getId(); LOG.info("Ledger ID: " + lh.getId()); String tmp = "BookKeeper is cool!"; int numEntries = 30; for (int i = 0; i < numEntries; i++) { lh.addEntry(tmp.getBytes()); } // Using wrong password ledgerPassword = "bbbb".getBytes(); try { lh = bkc.openLedger(ledgerId, digestType, ledgerPassword); fail("Opening ledger with wrong password should fail"); } catch (BKException e) { // expected: open should fail with the wrong password } } @Test(timeout=60000) public void testLedgerRecoveryWithNotEnoughBookies() throws Exception { int numEntries = 3; // Create a ledger LedgerHandle beforelh = null; beforelh = bkc.createLedger(3, 3, digestType, "".getBytes()); String tmp = "BookKeeper is cool!"; for (int i = 0; i < numEntries; i++) { beforelh.addEntry(tmp.getBytes()); } // shutdown first bookie server bs.get(0).shutdown(); bs.remove(0); /* * Try to open ledger. */ try { bkc.openLedger(beforelh.getId(), digestType, "".getBytes()); fail("should not reach here!"); } catch (Exception e) { // expected: recovery should throw without enough bookies } // start a new bookie server startNewBookie(); LedgerHandle afterlh = bkc.openLedger(beforelh.getId(), digestType, "".getBytes()); /* * Check if it has recovered properly. */ assertEquals(numEntries - 1, afterlh.getLastAddConfirmed()); } @Test(timeout=60000) public void testLedgerRecoveryWithSlowBookie() throws Exception { for (int i = 0; i < 3; i++) { LOG.info("TestLedgerRecoveryWithSlowBookie @ slow bookie {}", i); ledgerRecoveryWithSlowBookie(3, 3, 2, 1, i); } } private void ledgerRecoveryWithSlowBookie(int ensembleSize, int writeQuorumSize, int ackQuorumSize, int numEntries, int slowBookieIdx) throws Exception { // Create a ledger LedgerHandle beforelh = null; beforelh = bkc.createLedger(ensembleSize, writeQuorumSize, ackQuorumSize, digestType, "".getBytes()); // kill one bookie server and replace it with a fake one that silently // drops add requests, simulating a slow bookie that never acknowledges writes InetSocketAddress host = beforelh.getLedgerMetadata().currentEnsemble.get(slowBookieIdx); ServerConfiguration conf = killBookie(host); Bookie fakeBookie = new Bookie(conf) { @Override public void addEntry(ByteBuffer entry, WriteCallback cb, Object ctx, byte[] masterKey) throws IOException, BookieException { // drop request to simulate a slow and failed bookie } }; bsConfs.add(conf); bs.add(startBookie(conf, fakeBookie)); // avoid the not-enough-bookies case startNewBookie(); // writes would still succeed with 2 bookies acking String tmp = "BookKeeper is cool!"; for (int i = 0; i < numEntries; i++) { beforelh.addEntry(tmp.getBytes()); } conf = killBookie(host); bsConfs.add(conf); // restart the bookie as a normal one bs.add(startBookie(conf)); /* * Try to open ledger.
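* (by now the slow bookie has been restarted as a normally functioning one, so recovery should find a full quorum for every entry)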
*/ LedgerHandle afterlh = bkc.openLedger(beforelh.getId(), digestType, "".getBytes()); /* * Check if it has recovered properly. */ assertEquals(numEntries - 1, afterlh.getLastAddConfirmed()); } /** * {@link https://issues.apache.org/jira/browse/BOOKKEEPER-355} * A recovery during a rolling restart shouldn't affect the ability * to recover the ledger later. * We have a ledger on ensemble B1,B2,B3. * The sequence of events is * 1. B1 brought down for maintenance * 2. Ledger recovery started * 3. B2 answers read last confirmed. * 4. B1 replaced in ensemble by B4 * 5. Write to B4 fails for some reason * 6. B1 comes back up. * 7. B2 goes down for maintenance. * 8. Ledger recovery starts (ledger is now unavailable) */ @Test(timeout=60000) public void testLedgerRecoveryWithRollingRestart() throws Exception { LedgerHandle lhbefore = bkc.createLedger(numBookies, 2, digestType, "".getBytes()); for (int i = 0; i < (numBookies*3)+1; i++) { lhbefore.addEntry("data".getBytes()); } // Add a dead bookie to the cluster ServerConfiguration conf = newServerConfiguration(); Bookie deadBookie1 = new Bookie(conf) { @Override public void recoveryAddEntry(ByteBuffer entry, WriteCallback cb, Object ctx, byte[] masterKey) throws IOException, BookieException { // drop request to simulate a slow and failed bookie throw new IOException("Couldn't write for some reason"); } }; bsConfs.add(conf); bs.add(startBookie(conf, deadBookie1)); // kill first bookie server InetSocketAddress bookie1 = lhbefore.getLedgerMetadata().currentEnsemble.get(0); ServerConfiguration conf1 = killBookie(bookie1); // Try to recover and fence the ledger after killing one bookie in the // ensemble, while another bookie is registered in zk but not writable try { bkc.openLedger(lhbefore.getId(), digestType, "".getBytes()); fail("Shouldn't be able to open ledger, there should be entries missing"); } catch (BKException.BKLedgerRecoveryException e) { // expected } // restart the first server, kill the second bsConfs.add(conf1); bs.add(startBookie(conf1)); InetSocketAddress bookie2 = lhbefore.getLedgerMetadata().currentEnsemble.get(1); ServerConfiguration conf2 = killBookie(bookie2); // using async, because this could trigger an assertion final AtomicInteger returnCode = new AtomicInteger(0); final CountDownLatch openLatch = new CountDownLatch(1); bkc.asyncOpenLedger(lhbefore.getId(), digestType, "".getBytes(), new AsyncCallback.OpenCallback() { public void openComplete(int rc, LedgerHandle lh, Object ctx) { returnCode.set(rc); openLatch.countDown(); if (rc != BKException.Code.OK) { try { lh.close(); } catch (Exception e) { LOG.error("Exception closing ledger handle", e); } } } }, null); assertTrue("Open call should have completed", openLatch.await(5, TimeUnit.SECONDS)); assertFalse("Open should not have succeeded", returnCode.get() == BKException.Code.OK); bsConfs.add(conf2); bs.add(startBookie(conf2)); LedgerHandle lhafter = bkc.openLedger(lhbefore.getId(), digestType, "".getBytes()); assertEquals("Fenced ledger should have correct lastAddConfirmed", lhbefore.getLastAddConfirmed(), lhafter.getLastAddConfirmed()); } /** * {@link https://issues.apache.org/jira/browse/BOOKKEEPER-355} * Verify that if a recovery happens with 1 replica missing, and it's replaced * with a faulty bookie, it doesn't break future recovery from happening. * 1. Ledger is created with quorum size as 2, and entries are written * 2. Now the first bookie in the ensemble is brought down. * 3. Another client fences the ledger and tries to recover it * 4.
During this time an ensemble change will happen * and a new bookie will be added. But this bookie is not able to write. * 5. This recovery will fail. * 6. A new non-faulty bookie comes up. * 7. Another client tries to recover the same ledger. */ @Test(timeout=60000) public void testBookieFailureDuringRecovery() throws Exception { LedgerHandle lhbefore = bkc.createLedger(numBookies, 2, digestType, "".getBytes()); for (int i = 0; i < (numBookies*3)+1; i++) { lhbefore.addEntry("data".getBytes()); } // Add a dead bookie to the cluster ServerConfiguration conf = newServerConfiguration(); Bookie deadBookie1 = new Bookie(conf) { @Override public void recoveryAddEntry(ByteBuffer entry, WriteCallback cb, Object ctx, byte[] masterKey) throws IOException, BookieException { // drop request to simulate a slow and failed bookie throw new IOException("Couldn't write for some reason"); } }; bsConfs.add(conf); bs.add(startBookie(conf, deadBookie1)); // kill first bookie server InetSocketAddress bookie1 = lhbefore.getLedgerMetadata().currentEnsemble.get(0); ServerConfiguration conf1 = killBookie(bookie1); // Try to recover and fence the ledger after killing one bookie in the // ensemble, while another bookie is registered in zk but not writable try { bkc.openLedger(lhbefore.getId(), digestType, "".getBytes()); fail("Shouldn't be able to open ledger, there should be entries missing"); } catch (BKException.BKLedgerRecoveryException e) { // expected } // start a new good server startNewBookie(); LedgerHandle lhafter = bkc.openLedger(lhbefore.getId(), digestType, "".getBytes()); assertEquals("Fenced ledger should have correct lastAddConfirmed", lhbefore.getLastAddConfirmed(), lhafter.getLastAddConfirmed()); } /** * Verify that an ensemble change during a recovery add doesn't break * the recovery.
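* The test below kills two bookies from the current ensemble, brings them back as bookies that fail every recoveryAddEntry, and starts two healthy ones, forcing recovery through an ensemble change that should still converge on the writer's lastAddConfirmed.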
*/ @Test(timeout = 60000) public void testEnsembleChangeDuringRecovery() throws Exception { LedgerHandle lh = bkc.createLedger(numBookies, 2, 2, digestType, "".getBytes()); int numEntries = (numBookies * 3) + 1; final AtomicInteger numPendingAdds = new AtomicInteger(numEntries); final CountDownLatch addDone = new CountDownLatch(1); for (int i = 0; i < numEntries; i++) { lh.asyncAddEntry("data".getBytes(), new AddCallback() { @Override public void addComplete(int rc, LedgerHandle lh, long entryId, Object ctx) { if (BKException.Code.OK != rc) { addDone.countDown(); return; } if (numPendingAdds.decrementAndGet() == 0) { addDone.countDown(); } } }, null); } addDone.await(10, TimeUnit.SECONDS); if (numPendingAdds.get() > 0) { fail("Failed to add " + numEntries + " to ledger handle " + lh.getId()); } // kill first 2 bookies to replace bookies InetSocketAddress bookie1 = lh.getLedgerMetadata().currentEnsemble.get(0); ServerConfiguration conf1 = killBookie(bookie1); InetSocketAddress bookie2 = lh.getLedgerMetadata().currentEnsemble.get(1); ServerConfiguration conf2 = killBookie(bookie2); // replace these two bookies startDeadBookie(conf1); startDeadBookie(conf2); // kick in two brand new bookies startNewBookie(); startNewBookie(); // two dead bookies are put in the ensemble which would cause ensemble // change LedgerHandle recoveredLh = bkc.openLedger(lh.getId(), digestType, "".getBytes()); assertEquals("Fenced ledger should have correct lastAddConfirmed", lh.getLastAddConfirmed(), recoveredLh.getLastAddConfirmed()); } private void startDeadBookie(ServerConfiguration conf) throws Exception { Bookie rBookie = new Bookie(conf) { @Override public void recoveryAddEntry(ByteBuffer entry, WriteCallback cb, Object ctx, byte[] masterKey) throws IOException, BookieException { // drop request to simulate a dead bookie throw new IOException("Couldn't write entries for some reason"); } }; bsConfs.add(conf); bs.add(startBookie(conf, rBookie)); } } ListLedgersTest.java000066400000000000000000000071241244507361200346770ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/client/** * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with this * work for additional information regarding copyright ownership. The ASF * licenses this file to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the * License for the specific language governing permissions and limitations under * the License. 
*/ package org.apache.bookkeeper.client; import java.util.Iterator; import org.apache.bookkeeper.client.BookKeeper.DigestType; import org.apache.bookkeeper.conf.ClientConfiguration; import org.apache.bookkeeper.test.BaseTestCase; import org.apache.zookeeper.KeeperException; import org.junit.Assert; import org.junit.Test; import org.slf4j.Logger; import org.slf4j.LoggerFactory; public class ListLedgersTest extends BaseTestCase { static Logger LOG = LoggerFactory.getLogger(ListLedgersTest.class); DigestType digestType; public ListLedgersTest(DigestType digestType) { super(4); this.digestType = digestType; } @Test(timeout=60000) public void testListLedgers() throws Exception { int numOfLedgers = 10; ClientConfiguration conf = new ClientConfiguration() .setZkServers(zkUtil.getZooKeeperConnectString()); BookKeeper bkc = new BookKeeper(conf); for (int i = 0; i < numOfLedgers ; i++) { bkc.createLedger(digestType, "testPasswd". getBytes()).close(); } BookKeeperAdmin admin = new BookKeeperAdmin(zkUtil. getZooKeeperConnectString()); Iterable<Long> iterable = admin.listLedgers(); int counter = 0; for (Long lId: iterable) { counter++; } Assert.assertTrue("Wrong number of ledgers: expected " + numOfLedgers + ", found " + counter, counter == numOfLedgers); } @Test(timeout=60000) public void testEmptyList() throws Exception { ClientConfiguration conf = new ClientConfiguration() .setZkServers(zkUtil.getZooKeeperConnectString()); BookKeeperAdmin admin = new BookKeeperAdmin(zkUtil. getZooKeeperConnectString()); Iterable<Long> iterable = admin.listLedgers(); LOG.info("Empty list assertion"); Assert.assertFalse("There should be no ledger", iterable.iterator().hasNext()); } @Test(timeout=60000) public void testRemoveNotSupported() throws Exception { int numOfLedgers = 1; ClientConfiguration conf = new ClientConfiguration() .setZkServers(zkUtil.getZooKeeperConnectString()); BookKeeper bkc = new BookKeeper(conf); for (int i = 0; i < numOfLedgers ; i++) { bkc.createLedger(digestType, "testPasswd". getBytes()).close(); } BookKeeperAdmin admin = new BookKeeperAdmin(zkUtil. getZooKeeperConnectString()); Iterator<Long> iterator = admin.listLedgers().iterator(); iterator.next(); try { iterator.remove(); } catch (UnsupportedOperationException e) { // This exception is expected return; } Assert.fail("Remove is not supported, we shouldn't have reached this point"); } } RoundRobinDistributionScheduleTest.java000066400000000000000000000114061244507361200406120ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/client/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License.
* */ package org.apache.bookkeeper.client; import java.util.List; import java.util.Set; import java.util.HashSet; import com.google.common.collect.Sets; import org.junit.Test; import static org.junit.Assert.*; import junit.framework.TestCase; import org.slf4j.Logger; import org.slf4j.LoggerFactory; public class RoundRobinDistributionScheduleTest { static Logger LOG = LoggerFactory.getLogger(RoundRobinDistributionScheduleTest.class); @Test(timeout=60000) public void testDistributionSchedule() throws Exception { RoundRobinDistributionSchedule schedule = new RoundRobinDistributionSchedule(3, 2, 5); List wSet = schedule.getWriteSet(1); assertEquals("Write set is wrong size", wSet.size(), 3); DistributionSchedule.AckSet ackSet = schedule.getAckSet(); assertFalse("Shouldn't ack yet", ackSet.addBookieAndCheck(wSet.get(0))); assertFalse("Shouldn't ack yet", ackSet.addBookieAndCheck(wSet.get(0))); assertTrue("Should ack after 2 unique", ackSet.addBookieAndCheck(wSet.get(2))); assertTrue("Should still be acking", ackSet.addBookieAndCheck(wSet.get(1))); } /** * Test that coverage sets only respond as covered when it has * heard from enough bookies that no ack quorum can exist without these bookies. */ @Test(timeout=60000) public void testCoverageSets() { int errors = 0; for (int e = 6; e > 0; e--) { for (int w = e; w > 0; w--) { for (int a = w; a > 0; a--) { errors += testCoverageForConfiguration(e, w, a); } } } assertEquals("Should be no errors", 0, errors); } /** * Build a boolean array of which nodes have not responded * and thus are available to build a quorum. */ boolean[] buildAvailable(int ensemble, Set responses) { boolean[] available = new boolean[ensemble]; for (int i = 0; i < ensemble; i++) { if (responses.contains(i)) { available[i] = false; } else { available[i] = true; } } return available; } /** * Check whether it is possible for a write to reach * a quorum with a given set of nodes available */ boolean canGetAckQuorum(int ensemble, int writeQuorum, int ackQuorum, boolean[] available) { for (int i = 0; i < ensemble; i++) { int count = 0; for (int j = 0; j < writeQuorum; j++) { if (available[(i+j)%ensemble]) { count++; } } if (count >= ackQuorum) { return true; } } return false; } private int testCoverageForConfiguration(int ensemble, int writeQuorum, int ackQuorum) { RoundRobinDistributionSchedule schedule = new RoundRobinDistributionSchedule( writeQuorum, ackQuorum, ensemble); Set indexes = new HashSet(); for (int i = 0; i < ensemble; i++) { indexes.add(i); } Set> subsets = Sets.powerSet(indexes); int errors = 0; for (Set subset : subsets) { DistributionSchedule.QuorumCoverageSet covSet = schedule.getCoverageSet(); boolean covSetSays = false; for (Integer i : subset) { covSetSays = covSet.addBookieAndCheckCovered(i); } boolean[] nodesAvailable = buildAvailable(ensemble, subset); boolean canGetAck = canGetAckQuorum(ensemble, writeQuorum, ackQuorum, nodesAvailable); if (canGetAck == covSetSays) { LOG.error("e{}:w{}:a{} available {} canGetAck {} covSetSays {}", new Object[] { ensemble, writeQuorum, ackQuorum, nodesAvailable, canGetAck, covSetSays }); errors++; } } return errors; } } SlowBookieTest.java000066400000000000000000000174131244507361200345350ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/client/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. 
The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.client; import java.util.Set; import java.util.List; import java.util.concurrent.CountDownLatch; import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.atomic.AtomicBoolean; import java.net.InetSocketAddress; import org.junit.Test; import static org.junit.Assert.*; import junit.framework.TestCase; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.GenericCallback; import org.apache.bookkeeper.test.BookKeeperClusterTestCase; import org.apache.bookkeeper.conf.ClientConfiguration; @SuppressWarnings("deprecation") public class SlowBookieTest extends BookKeeperClusterTestCase { static Logger LOG = LoggerFactory.getLogger(SlowBookieTest.class); public SlowBookieTest() { super(4); } @Test(timeout=60000) public void testSlowBookie() throws Exception { ClientConfiguration conf = new ClientConfiguration(); conf.setZkServers(zkUtil.getZooKeeperConnectString()).setReadTimeout(360); BookKeeper bkc = new BookKeeper(conf); LedgerHandle lh = bkc.createLedger(4, 3, 2, BookKeeper.DigestType.CRC32, new byte[] {}); byte[] entry = "Test Entry".getBytes(); for (int i = 0; i < 10; i++) { lh.addEntry(entry); } final CountDownLatch b0latch = new CountDownLatch(1); final CountDownLatch b1latch = new CountDownLatch(1); List curEns = lh.getLedgerMetadata().currentEnsemble; try { sleepBookie(curEns.get(0), b0latch); for (int i = 0; i < 10; i++) { lh.addEntry(entry); } sleepBookie(curEns.get(2), b1latch); // should cover all quorums final AtomicInteger i = new AtomicInteger(0xdeadbeef); AsyncCallback.AddCallback cb = new AsyncCallback.AddCallback() { public void addComplete(int rc, LedgerHandle lh, long entryId, Object ctx) { i.set(rc); } }; lh.asyncAddEntry(entry, cb, null); Thread.sleep(1000); // sleep a second to allow time to complete assertEquals(i.get(), 0xdeadbeef); b0latch.countDown(); b1latch.countDown(); Thread.sleep(2000); assertEquals(i.get(), BKException.Code.OK); } finally { b0latch.countDown(); b1latch.countDown(); } } @Test(timeout=60000) public void testBookieFailureWithSlowBookie() throws Exception { ClientConfiguration conf = new ClientConfiguration(); conf.setZkServers(zkUtil.getZooKeeperConnectString()).setReadTimeout(5); BookKeeper bkc = new BookKeeper(conf); byte[] pwd = new byte[] {}; final LedgerHandle lh = bkc.createLedger(4, 3, 2, BookKeeper.DigestType.CRC32, pwd); long lid = lh.getId(); final AtomicBoolean finished = new AtomicBoolean(false); final AtomicBoolean failTest = new AtomicBoolean(false); final byte[] entry = "Test Entry".getBytes(); Thread t = new Thread() { public void run() { try { while (!finished.get()) { lh.addEntry(entry); } } catch (Exception e) { LOG.error("Exception in add entry thread", e); failTest.set(true); } } }; t.start(); final CountDownLatch b0latch = new CountDownLatch(1); startNewBookie(); sleepBookie(getBookie(0), b0latch); Thread.sleep(10000); b0latch.countDown(); 
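// Waking the slow bookie before stopping the writer gives any adds still parked on it a chance to complete, so the ledger can be closed cleanly below.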
finished.set(true); t.join(); assertFalse(failTest.get()); lh.close(); LedgerHandle lh2 = bkc.openLedger(lh.getId(), BookKeeper.DigestType.CRC32, pwd); LedgerChecker lc = new LedgerChecker(bkc); final CountDownLatch checklatch = new CountDownLatch(1); final AtomicInteger numFragments = new AtomicInteger(-1); lc.checkLedger(lh2, new GenericCallback>() { public void operationComplete(int rc, Set fragments) { LOG.debug("Checked ledgers returned {} {}", rc, fragments); if (rc == BKException.Code.OK) { numFragments.set(fragments.size()); } checklatch.countDown(); } }); checklatch.await(); assertEquals("There should be no missing fragments", 0, numFragments.get()); } @Test(timeout=60000) public void testManyBookieFailureWithSlowBookies() throws Exception { ClientConfiguration conf = new ClientConfiguration(); conf.setZkServers(zkUtil.getZooKeeperConnectString()).setReadTimeout(5); BookKeeper bkc = new BookKeeper(conf); byte[] pwd = new byte[] {}; final LedgerHandle lh = bkc.createLedger(4, 3, 1, BookKeeper.DigestType.CRC32, pwd); long lid = lh.getId(); final AtomicBoolean finished = new AtomicBoolean(false); final AtomicBoolean failTest = new AtomicBoolean(false); final byte[] entry = "Test Entry".getBytes(); Thread t = new Thread() { public void run() { try { while (!finished.get()) { lh.addEntry(entry); } } catch (Exception e) { LOG.error("Exception in add entry thread", e); failTest.set(true); } } }; t.start(); final CountDownLatch b0latch = new CountDownLatch(1); final CountDownLatch b1latch = new CountDownLatch(1); startNewBookie(); startNewBookie(); sleepBookie(getBookie(0), b0latch); sleepBookie(getBookie(1), b1latch); Thread.sleep(10000); b0latch.countDown(); b1latch.countDown(); finished.set(true); t.join(); assertFalse(failTest.get()); lh.close(); LedgerHandle lh2 = bkc.openLedger(lh.getId(), BookKeeper.DigestType.CRC32, pwd); LedgerChecker lc = new LedgerChecker(bkc); final CountDownLatch checklatch = new CountDownLatch(1); final AtomicInteger numFragments = new AtomicInteger(-1); lc.checkLedger(lh2, new GenericCallback>() { public void operationComplete(int rc, Set fragments) { LOG.debug("Checked ledgers returned {} {}", rc, fragments); if (rc == BKException.Code.OK) { numFragments.set(fragments.size()); } checklatch.countDown(); } }); checklatch.await(); assertEquals("There should be no missing fragments", 0, numFragments.get()); } } TestFencing.java000066400000000000000000000323771244507361200340370ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/clientpackage org.apache.bookkeeper.client; /* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
* */ import org.junit.*; import java.net.InetSocketAddress; import java.util.Enumeration; import java.util.concurrent.CyclicBarrier; import java.util.concurrent.CountDownLatch; import org.apache.bookkeeper.conf.ClientConfiguration; import org.apache.bookkeeper.client.LedgerHandle; import org.apache.bookkeeper.client.LedgerEntry; import org.apache.bookkeeper.client.BookKeeper; import org.apache.bookkeeper.client.BookKeeperAdmin; import org.apache.bookkeeper.client.BKException; import org.apache.bookkeeper.client.BookKeeper.DigestType; import org.apache.bookkeeper.test.BaseTestCase; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * This unit test tests ledger fencing; * */ public class TestFencing extends BaseTestCase { static Logger LOG = LoggerFactory.getLogger(TestFencing.class); DigestType digestType; public TestFencing(DigestType digestType) { super(10); this.digestType = digestType; } /** * Basic fencing test. Create ledger, write to it, * open ledger, write again (should fail). */ @Test(timeout=60000) public void testBasicFencing() throws Exception { /* * Create ledger. */ LedgerHandle writelh = null; writelh = bkc.createLedger(digestType, "password".getBytes()); String tmp = "BookKeeper is cool!"; for (int i = 0; i < 10; i++) { writelh.addEntry(tmp.getBytes()); } /* * Try to open ledger. */ LedgerHandle readlh = bkc.openLedger(writelh.getId(), digestType, "password".getBytes()); // should have triggered recovery and fencing try { writelh.addEntry(tmp.getBytes()); LOG.error("Should have thrown an exception"); fail("Should have thrown an exception when trying to write"); } catch (BKException.BKLedgerFencedException e) { // correct behaviour } /* * Check if has recovered properly. */ assertTrue("Has not recovered correctly: " + readlh.getLastAddConfirmed() + " original " + writelh.getLastAddConfirmed(), readlh.getLastAddConfirmed() == writelh.getLastAddConfirmed()); } private static int threadCount = 0; class LedgerOpenThread extends Thread { private final long ledgerId; private long lastConfirmedEntry = 0; private final DigestType digestType; private final CyclicBarrier barrier; LedgerOpenThread (DigestType digestType, long ledgerId, CyclicBarrier barrier) throws Exception { super("TestFencing-LedgerOpenThread-" + threadCount++); this.ledgerId = ledgerId; this.digestType = digestType; this.barrier = barrier; } @Override public void run() { LedgerHandle lh = null; BookKeeper bk = null; try { barrier.await(); while(true) { try { bk = new BookKeeper(new ClientConfiguration(baseClientConf), bkc.getZkHandle()); lh = bk.openLedger(ledgerId, digestType, "".getBytes()); lastConfirmedEntry = lh.getLastAddConfirmed(); lh.close(); break; } catch (BKException.BKMetadataVersionException zke) { LOG.info("Contention with someone else recovering"); } catch (BKException.BKLedgerRecoveryException bkre) { LOG.info("Contention with someone else recovering"); } finally { if (lh != null) { lh.close(); } if (bk != null) { bk.close(); bk = null; } } } } catch (Exception e) { // just exit, test should spot bad last add confirmed LOG.error("Exception occurred ", e); } LOG.info("Thread exiting, lastConfirmedEntry = " + lastConfirmedEntry); } long getLastConfirmedEntry() { return lastConfirmedEntry; } } /** * Try to open a ledger many times in parallel. * All opens should result in a ledger with an equals number of * entries. */ @Test(timeout=60000) public void testManyOpenParallel() throws Exception { /* * Create ledger. 
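* The writer thread below keeps adding entries until one of the parallel opens fences the ledger; every open that succeeds should then observe a lastAddConfirmed no smaller than the writer's final value.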
*/ final LedgerHandle writelh = bkc.createLedger(digestType, "".getBytes()); final int numRecovery = 10; final String tmp = "BookKeeper is cool!"; final CountDownLatch latch = new CountDownLatch(numRecovery); Thread writethread = new Thread() { public void run() { try { while (true) { writelh.addEntry(tmp.getBytes()); latch.countDown(); } } catch (Exception e) { LOG.info("Exception adding entry", e); } } }; writethread.start(); CyclicBarrier barrier = new CyclicBarrier(numRecovery+1); LedgerOpenThread threads[] = new LedgerOpenThread[numRecovery]; for (int i = 0; i < numRecovery; i++) { threads[i] = new LedgerOpenThread(digestType, writelh.getId(), barrier); threads[i].start(); } latch.await(); barrier.await(); // should trigger threads to go writethread.join(); long lastConfirmed = writelh.getLastAddConfirmed(); for (int i = 0; i < numRecovery; i++) { threads[i].join(); assertTrue("Added confirmed is incorrect", lastConfirmed <= threads[i].getLastConfirmedEntry()); } } /** * Test that opening a ledger in norecovery mode * doesn't fence off a ledger */ @Test(timeout=60000) public void testNoRecoveryOpen() throws Exception { /* * Create ledger. */ LedgerHandle writelh = null; writelh = bkc.createLedger(digestType, "".getBytes()); String tmp = "BookKeeper is cool!"; final int numEntries = 10; for (int i = 0; i < numEntries; i++) { writelh.addEntry(tmp.getBytes()); } /* * Try to open ledger. */ LedgerHandle readlh = bkc.openLedgerNoRecovery(writelh.getId(), digestType, "".getBytes()); // should not have triggered recovery and fencing writelh.addEntry(tmp.getBytes()); long numReadable = readlh.getLastAddConfirmed(); LOG.error("numRead " + numReadable); Enumeration entries = readlh.readEntries(1, numReadable); try { readlh.readEntries(numReadable+1, numReadable+1); fail("Shouldn't have been able to read this far"); } catch (BKException.BKReadException e) { // all is good } writelh.addEntry(tmp.getBytes()); long numReadable2 = readlh.getLastAddConfirmed(); assertEquals("Number of readable entries hasn't changed", numReadable2, numReadable); readlh.close(); writelh.addEntry(tmp.getBytes()); writelh.close(); } /** * create a ledger and write entries. * kill a bookie in the ensemble. Recover. * Fence the ledger. Kill another bookie. Recover. */ @Test(timeout=60000) public void testFencingInteractionWithBookieRecovery() throws Exception { System.setProperty("digestType", digestType.toString()); System.setProperty("passwd", "testPasswd"); BookKeeperAdmin admin = new BookKeeperAdmin(zkUtil.getZooKeeperConnectString()); LedgerHandle writelh = bkc.createLedger(digestType, "testPasswd".getBytes()); String tmp = "Foobar"; final int numEntries = 10; for (int i = 0; i < numEntries; i++) { writelh.addEntry(tmp.getBytes()); } InetSocketAddress bookieToKill = writelh.getLedgerMetadata().getEnsemble(numEntries).get(0); killBookie(bookieToKill); // write entries to change ensemble for (int i = 0; i < numEntries; i++) { writelh.addEntry(tmp.getBytes()); } admin.recoverBookieData(bookieToKill, null); for (int i = 0; i < numEntries; i++) { writelh.addEntry(tmp.getBytes()); } LedgerHandle readlh = bkc.openLedger(writelh.getId(), digestType, "testPasswd".getBytes()); try { writelh.addEntry(tmp.getBytes()); LOG.error("Should have thrown an exception"); fail("Should have thrown an exception when trying to write"); } catch (BKException.BKLedgerFencedException e) { // correct behaviour } readlh.close(); writelh.close(); } /** * create a ledger and write entries. * Fence the ledger. Kill a bookie. Recover. 
* Ensure that recover doesn't reallow adding */ @Test(timeout=60000) public void testFencingInteractionWithBookieRecovery2() throws Exception { System.setProperty("digestType", digestType.toString()); System.setProperty("passwd", "testPasswd"); BookKeeperAdmin admin = new BookKeeperAdmin(zkUtil.getZooKeeperConnectString()); LedgerHandle writelh = bkc.createLedger(digestType, "testPasswd".getBytes()); String tmp = "Foobar"; final int numEntries = 10; for (int i = 0; i < numEntries; i++) { writelh.addEntry(tmp.getBytes()); } LedgerHandle readlh = bkc.openLedger(writelh.getId(), digestType, "testPasswd".getBytes()); // should be fenced by now InetSocketAddress bookieToKill = writelh.getLedgerMetadata().getEnsemble(numEntries).get(0); killBookie(bookieToKill); admin.recoverBookieData(bookieToKill, null); try { writelh.addEntry(tmp.getBytes()); LOG.error("Should have thrown an exception"); fail("Should have thrown an exception when trying to write"); } catch (BKException.BKLedgerFencedException e) { // correct behaviour } readlh.close(); writelh.close(); } /** * Test that fencing doesn't work with a bad password */ @Test(timeout=60000) public void testFencingBadPassword() throws Exception { /* * Create ledger. */ LedgerHandle writelh = null; writelh = bkc.createLedger(digestType, "password1".getBytes()); String tmp = "BookKeeper is cool!"; for (int i = 0; i < 10; i++) { writelh.addEntry(tmp.getBytes()); } /* * Try to open ledger. */ try { LedgerHandle readlh = bkc.openLedger(writelh.getId(), digestType, "badPassword".getBytes()); fail("Should not have been able to open with a bad password"); } catch (BKException.BKUnauthorizedAccessException uue) { // correct behaviour } // should have triggered recovery and fencing writelh.addEntry(tmp.getBytes()); } @Test public void testFencingAndRestartBookies() throws Exception { LedgerHandle writelh = null; writelh = bkc.createLedger(digestType, "password".getBytes()); String tmp = "BookKeeper is cool!"; for (int i = 0; i < 10; i++) { writelh.addEntry(tmp.getBytes()); } /* * Try to open ledger. */ LedgerHandle readlh = bkc.openLedger(writelh.getId(), digestType, "password".getBytes()); restartBookies(); try { writelh.addEntry(tmp.getBytes()); LOG.error("Should have thrown an exception"); fail("Should have thrown an exception when trying to write"); } catch (BKException.BKLedgerFencedException e) { // correct behaviour } readlh.close(); } } TestLedgerChecker.java000066400000000000000000000436451244507361200351550ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/client/** * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
* */ package org.apache.bookkeeper.client; import java.net.InetSocketAddress; import java.util.ArrayList; import java.util.Set; import java.util.concurrent.CountDownLatch; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.GenericCallback; import org.apache.bookkeeper.test.BookKeeperClusterTestCase; import org.junit.Test; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Tests the functionality of LedgerChecker. The ledger checker should be able * to detect exactly the underReplicated fragments. */ public class TestLedgerChecker extends BookKeeperClusterTestCase { private static final byte[] TEST_LEDGER_ENTRY_DATA = "TestCheckerData" .getBytes(); private static final byte[] TEST_LEDGER_PASSWORD = "testpasswd".getBytes(); static Logger LOG = LoggerFactory.getLogger(TestLedgerChecker.class); public TestLedgerChecker() { super(3); } class CheckerCallback implements GenericCallback<Set<LedgerFragment>> { private Set<LedgerFragment> result = null; private CountDownLatch latch = new CountDownLatch(1); public void operationComplete(int rc, Set<LedgerFragment> result) { this.result = result; latch.countDown(); } Set<LedgerFragment> waitAndGetResult() throws InterruptedException { latch.await(); return result; } } /** * Tests that the LedgerChecker should detect the underReplicated fragments * on multiple Bookie crashes */ @Test(timeout=60000) public void testChecker() throws Exception { LedgerHandle lh = bkc.createLedger(BookKeeper.DigestType.CRC32, TEST_LEDGER_PASSWORD); startNewBookie(); for (int i = 0; i < 10; i++) { lh.addEntry(TEST_LEDGER_ENTRY_DATA); } InetSocketAddress replicaToKill = lh.getLedgerMetadata().getEnsembles() .get(0L).get(0); LOG.info("Killing {}", replicaToKill); killBookie(replicaToKill); for (int i = 0; i < 10; i++) { lh.addEntry(TEST_LEDGER_ENTRY_DATA); } Set<LedgerFragment> result = getUnderReplicatedFragments(lh); assertNotNull("Result shouldn't be null", result); for (LedgerFragment r : result) { LOG.info("unreplicated fragment: {}", r); } assertEquals("Should have one missing fragment", 1, result.size()); assertEquals("Fragment should be missing from first replica", result .iterator().next().getAddress(), replicaToKill); InetSocketAddress replicaToKill2 = lh.getLedgerMetadata() .getEnsembles().get(0L).get(1); LOG.info("Killing {}", replicaToKill2); killBookie(replicaToKill2); result = getUnderReplicatedFragments(lh); assertNotNull("Result shouldn't be null", result); for (LedgerFragment r : result) { LOG.info("unreplicated fragment: {}", r); } assertEquals("Should have three missing fragments", 3, result.size()); } /** * Tests that the ledger checker should flag a fragment as bad only if some of * the fragment's entries do not meet the quorum. */ // ///////////////////////////////////////////////////// // /////////Ensemble = 3, Quorum = 2 /////////////////// // /Sample Ledger meta data should look like//////////// // /0 a b c /////*entry present in a,b. Now kill c////// // /1 a b d //////////////////////////////////////////// // /Here even though one BK failed at this stage, ////// // /we don't have any missed entries. Quorum satisfied// // /So, there should not be any missing replicas./////// // ///////////////////////////////////////////////////// @Test(timeout = 3000) public void testShouldNotGetTheFragmentIfThereIsNoMissedEntry() throws Exception { LedgerHandle lh = bkc.createLedger(3, 2, BookKeeper.DigestType.CRC32, TEST_LEDGER_PASSWORD); lh.addEntry(TEST_LEDGER_ENTRY_DATA); // The entry should have been added to the first 2 bookies. // Kill the 3rd BK from the ensemble.
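// A quick sketch of the expectation, assuming the default round-robin
// distribution schedule with ensemble = 3 and quorum = 2 (entry e goes
// to bookies e % 3 and (e + 1) % 3 of the current ensemble): entry 0
// lands on {a, b}, so killing c loses no replica of any written entry;
// after d replaces c, entry 1 lands on {b, d}. Every entry keeps its
// full quorum, so no fragment should be reported below.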
ArrayList<InetSocketAddress> firstEnsemble = lh.getLedgerMetadata() .getEnsembles().get(0L); InetSocketAddress lastBookieFromEnsemble = firstEnsemble.get(2); LOG.info("Killing " + lastBookieFromEnsemble + " from ensemble=" + firstEnsemble); killBookie(lastBookieFromEnsemble); startNewBookie(); LOG.info("Ensembles after first entry :" + lh.getLedgerMetadata().getEnsembles()); // Add one more entry. Here the ensemble should be re-formed. lh.addEntry(TEST_LEDGER_ENTRY_DATA); LOG.info("Ensembles after second entry :" + lh.getLedgerMetadata().getEnsembles()); Set<LedgerFragment> result = getUnderReplicatedFragments(lh); assertNotNull("Result shouldn't be null", result); for (LedgerFragment r : result) { LOG.info("unreplicated fragment: {}", r); } assertEquals("Should not have any missing fragment", 0, result.size()); } /** * Tests that LedgerChecker should give two fragments when 2 bookies fail * in the same ensemble, with ensemble = 3 and quorum = 2 */ @Test(timeout = 3000) public void testShouldGetTwoFragmentsIfTwoBookiesFailedInSameEnsemble() throws Exception { LedgerHandle lh = bkc.createLedger(3, 2, BookKeeper.DigestType.CRC32, TEST_LEDGER_PASSWORD); startNewBookie(); startNewBookie(); lh.addEntry(TEST_LEDGER_ENTRY_DATA); ArrayList<InetSocketAddress> firstEnsemble = lh.getLedgerMetadata() .getEnsembles().get(0L); InetSocketAddress firstBookieFromEnsemble = firstEnsemble.get(0); killBookie(firstEnsemble, firstBookieFromEnsemble); InetSocketAddress secondBookieFromEnsemble = firstEnsemble.get(1); killBookie(firstEnsemble, secondBookieFromEnsemble); lh.addEntry(TEST_LEDGER_ENTRY_DATA); Set<LedgerFragment> result = getUnderReplicatedFragments(lh); assertNotNull("Result shouldn't be null", result); for (LedgerFragment r : result) { LOG.info("unreplicated fragment: {}", r); } assertEquals("There should be 2 fragments", 2, result.size()); } /** * Tests that LedgerChecker should not get any underReplicated fragments if * the corresponding ledger does not exist. */ @Test(timeout = 3000) public void testShouldNotGetAnyFragmentIfNoLedgerPresent() throws Exception { LedgerHandle lh = bkc.createLedger(3, 2, BookKeeper.DigestType.CRC32, TEST_LEDGER_PASSWORD); ArrayList<InetSocketAddress> firstEnsemble = lh.getLedgerMetadata() .getEnsembles().get(0L); InetSocketAddress firstBookieFromEnsemble = firstEnsemble.get(0); killBookie(firstBookieFromEnsemble); startNewBookie(); lh.addEntry(TEST_LEDGER_ENTRY_DATA); bkc.deleteLedger(lh.getId()); Set<LedgerFragment> result = getUnderReplicatedFragments(lh); assertNotNull("Result shouldn't be null", result); assertEquals("There should be 0 fragments.
But returned fragments are " + result, 0, result.size()); } /** * Tests that LedgerChecker should report one fragment per failed bookie * if all of an ensemble's bookies fail before the next write */ @Test(timeout = 3000) public void testShouldGetFailedEnsembleNumberOfFragmentsIfEnsembleBookiesFailedOnNextWrite() throws Exception { startNewBookie(); startNewBookie(); LedgerHandle lh = bkc.createLedger(3, 2, BookKeeper.DigestType.CRC32, TEST_LEDGER_PASSWORD); for (int i = 0; i < 3; i++) { lh.addEntry(TEST_LEDGER_ENTRY_DATA); } // Kill all three bookies ArrayList<InetSocketAddress> firstEnsemble = lh.getLedgerMetadata() .getEnsembles().get(0L); for (InetSocketAddress bkAddr : firstEnsemble) { killBookie(firstEnsemble, bkAddr); } Set<LedgerFragment> result = getUnderReplicatedFragments(lh); assertNotNull("Result shouldn't be null", result); for (LedgerFragment r : result) { LOG.info("unreplicated fragment: {}", r); } assertEquals("There should be 3 fragments", 3, result.size()); } /** * Tests that LedgerChecker should not get any fragments as underReplicated * if the ledger itself is empty */ @Test(timeout = 3000) public void testShouldNotGetAnyFragmentWithEmptyLedger() throws Exception { LedgerHandle lh = bkc.createLedger(3, 2, BookKeeper.DigestType.CRC32, TEST_LEDGER_PASSWORD); Set<LedgerFragment> result = getUnderReplicatedFragments(lh); assertNotNull("Result shouldn't be null", result); assertEquals("There should be 0 fragments. But returned fragments are " + result, 0, result.size()); } /** * Tests that LedgerChecker should get fragments if the ledger is empty * but all bookies in the ensemble are down. * In this case, there's no way to tell whether data was written or not. * There will only be two fragments, as the quorum is 2 and we only * suspect that the first entry of the ledger could exist. */ @Test(timeout = 3000) public void testShouldGet2FragmentsWithEmptyLedgerButBookiesDead() throws Exception { LedgerHandle lh = bkc.createLedger(3, 2, BookKeeper.DigestType.CRC32, TEST_LEDGER_PASSWORD); for (InetSocketAddress b : lh.getLedgerMetadata().getEnsembles().get(0L)) { killBookie(b); } Set<LedgerFragment> result = getUnderReplicatedFragments(lh); assertNotNull("Result shouldn't be null", result); assertEquals("There should be 2 fragments.", 2, result.size()); } /** * Tests that LedgerChecker should report one fragment as underReplicated * if there is an open ledger with a single entry written. */ @Test(timeout = 3000) public void testShouldGetOneFragmentWithSingleEntryOpenedLedger() throws Exception { LedgerHandle lh = bkc.createLedger(3, 3, BookKeeper.DigestType.CRC32, TEST_LEDGER_PASSWORD); lh.addEntry(TEST_LEDGER_ENTRY_DATA); ArrayList<InetSocketAddress> firstEnsemble = lh.getLedgerMetadata() .getEnsembles().get(0L); InetSocketAddress lastBookieFromEnsemble = firstEnsemble.get(0); LOG.info("Killing " + lastBookieFromEnsemble + " from ensemble=" + firstEnsemble); killBookie(lastBookieFromEnsemble); startNewBookie(); //Open the ledger separately for the ledger checker. LedgerHandle lh1 = bkc.openLedgerNoRecovery(lh.getId(), BookKeeper.DigestType.CRC32, TEST_LEDGER_PASSWORD); Set<LedgerFragment> result = getUnderReplicatedFragments(lh1); assertNotNull("Result shouldn't be null", result); assertEquals("There should be 1 fragment. But returned fragments are " + result, 1, result.size()); } /** * Tests that LedgerChecker correctly identifies missing fragments * when a single entry is written after an ensemble change. * This is important, as the last add confirmed may be less than the * first entry id of the final segment.
*/ @Test(timeout = 3000) public void testSingleEntryAfterEnsembleChange() throws Exception { LedgerHandle lh = bkc.createLedger(3, 3, BookKeeper.DigestType.CRC32, TEST_LEDGER_PASSWORD); for (int i = 0; i < 10; i++) { lh.addEntry(TEST_LEDGER_ENTRY_DATA); } ArrayList<InetSocketAddress> firstEnsemble = lh.getLedgerMetadata() .getEnsembles().get(0L); InetSocketAddress lastBookieFromEnsemble = firstEnsemble.get( lh.getDistributionSchedule().getWriteSet(lh.getLastAddPushed()).get(0)); LOG.info("Killing " + lastBookieFromEnsemble + " from ensemble=" + firstEnsemble); killBookie(lastBookieFromEnsemble); startNewBookie(); lh.addEntry(TEST_LEDGER_ENTRY_DATA); lastBookieFromEnsemble = firstEnsemble.get( lh.getDistributionSchedule().getWriteSet(lh.getLastAddPushed()).get(1)); LOG.info("Killing " + lastBookieFromEnsemble + " from ensemble=" + firstEnsemble); killBookie(lastBookieFromEnsemble); //Open the ledger separately for the ledger checker. LedgerHandle lh1 = bkc.openLedgerNoRecovery(lh.getId(), BookKeeper.DigestType.CRC32, TEST_LEDGER_PASSWORD); Set<LedgerFragment> result = getUnderReplicatedFragments(lh1); assertNotNull("Result shouldn't be null", result); assertEquals("There should be 3 fragments. But returned fragments are " + result, 3, result.size()); } /** * Tests that LedgerChecker does not return any fragments * from a closed ledger with 0 entries. */ @Test(timeout = 3000) public void testClosedEmptyLedger() throws Exception { LedgerHandle lh = bkc.createLedger(3, 3, BookKeeper.DigestType.CRC32, TEST_LEDGER_PASSWORD); ArrayList<InetSocketAddress> firstEnsemble = lh.getLedgerMetadata() .getEnsembles().get(0L); lh.close(); InetSocketAddress lastBookieFromEnsemble = firstEnsemble.get(0); LOG.info("Killing " + lastBookieFromEnsemble + " from ensemble=" + firstEnsemble); killBookie(lastBookieFromEnsemble); //Open the ledger separately for the ledger checker. LedgerHandle lh1 = bkc.openLedgerNoRecovery(lh.getId(), BookKeeper.DigestType.CRC32, TEST_LEDGER_PASSWORD); Set<LedgerFragment> result = getUnderReplicatedFragments(lh1); assertNotNull("Result shouldn't be null", result); assertEquals("There should be 0 fragments. But returned fragments are " + result, 0, result.size()); } /** * Tests which fragments LedgerChecker returns from a closed ledger * with a single entry, as its replicas are killed one by one. */ @Test(timeout = 3000) public void testClosedSingleEntryLedger() throws Exception { LedgerHandle lh = bkc.createLedger(3, 2, BookKeeper.DigestType.CRC32, TEST_LEDGER_PASSWORD); ArrayList<InetSocketAddress> firstEnsemble = lh.getLedgerMetadata() .getEnsembles().get(0L); lh.addEntry(TEST_LEDGER_ENTRY_DATA); lh.close(); // kill bookie 2 InetSocketAddress lastBookieFromEnsemble = firstEnsemble.get(2); LOG.info("Killing " + lastBookieFromEnsemble + " from ensemble=" + firstEnsemble); killBookie(lastBookieFromEnsemble); //Open the ledger separately for the ledger checker. LedgerHandle lh1 = bkc.openLedgerNoRecovery(lh.getId(), BookKeeper.DigestType.CRC32, TEST_LEDGER_PASSWORD); Set<LedgerFragment> result = getUnderReplicatedFragments(lh1); assertNotNull("Result shouldn't be null", result); assertEquals("There should be 0 fragments. But returned fragments are " + result, 0, result.size()); lh1.close(); // kill bookie 1 lastBookieFromEnsemble = firstEnsemble.get(1); LOG.info("Killing " + lastBookieFromEnsemble + " from ensemble=" + firstEnsemble); killBookie(lastBookieFromEnsemble); startNewBookie(); //Open the ledger separately for the ledger checker.
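// Why the counts below: with ensemble = 3 and quorum = 2, the single
// entry 0 lives on two bookies of the first ensemble (assuming the
// default round-robin write set). Killing bookie 2 above lost nothing;
// killing bookie 1 as well loses one replica (1 fragment below), and
// killing bookie 0 too loses both replicas (2 fragments below).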
lh1 = bkc.openLedgerNoRecovery(lh.getId(), BookKeeper.DigestType.CRC32, TEST_LEDGER_PASSWORD); result = getUnderReplicatedFragments(lh1); assertNotNull("Result shouldn't be null", result); assertEquals("There should be 1 fragment. But returned fragments are " + result, 1, result.size()); lh1.close(); // kill bookie 0 lastBookieFromEnsemble = firstEnsemble.get(0); LOG.info("Killing " + lastBookieFromEnsemble + " from ensemble=" + firstEnsemble); killBookie(lastBookieFromEnsemble); startNewBookie(); //Open the ledger separately for the ledger checker. lh1 = bkc.openLedgerNoRecovery(lh.getId(), BookKeeper.DigestType.CRC32, TEST_LEDGER_PASSWORD); result = getUnderReplicatedFragments(lh1); assertNotNull("Result shouldn't be null", result); assertEquals("There should be 2 fragments. But returned fragments are " + result, 2, result.size()); lh1.close(); } private Set<LedgerFragment> getUnderReplicatedFragments(LedgerHandle lh) throws InterruptedException { LedgerChecker checker = new LedgerChecker(bkc); CheckerCallback cb = new CheckerCallback(); checker.checkLedger(lh, cb); Set<LedgerFragment> result = cb.waitAndGetResult(); return result; } private void killBookie(ArrayList<InetSocketAddress> firstEnsemble, InetSocketAddress bookie) throws Exception { LOG.info("Killing " + bookie + " from ensemble=" + firstEnsemble); killBookie(bookie); } } TestLedgerFragmentReplication.java000066400000000000000000000317371244507361200375430ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/client/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.client; import java.net.InetAddress; import java.net.InetSocketAddress; import java.util.ArrayList; import java.util.Enumeration; import java.util.Set; import java.util.SortedMap; import java.util.Map.Entry; import java.util.concurrent.CountDownLatch; import org.apache.bookkeeper.client.BookKeeper.DigestType; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.GenericCallback; import org.apache.bookkeeper.test.BookKeeperClusterTestCase; import org.junit.Test; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Tests that BookKeeperAdmin should be able to replicate the failed bookie's fragments * to the target bookie.
*/ public class TestLedgerFragmentReplication extends BookKeeperClusterTestCase { private static final byte[] TEST_PSSWD = "testpasswd".getBytes(); private static final DigestType TEST_DIGEST_TYPE = BookKeeper.DigestType.CRC32; private static Logger LOG = LoggerFactory .getLogger(TestLedgerFragmentReplication.class); public TestLedgerFragmentReplication() { super(3); } private static class CheckerCallback implements GenericCallback<Set<LedgerFragment>> { private Set<LedgerFragment> result = null; private CountDownLatch latch = new CountDownLatch(1); Set<LedgerFragment> waitAndGetResult() throws InterruptedException { latch.await(); return result; } @Override public void operationComplete(int rc, Set<LedgerFragment> result) { this.result = result; latch.countDown(); } } /** * Tests that the replicate method should copy the failed bookie's fragments * to the target bookie passed in. */ @Test(timeout=60000) public void testReplicateLFShouldCopyFailedBookieFragmentsToTargetBookie() throws Exception { byte[] data = "TestLedgerFragmentReplication".getBytes(); LedgerHandle lh = bkc.createLedger(3, 3, TEST_DIGEST_TYPE, TEST_PSSWD); for (int i = 0; i < 10; i++) { lh.addEntry(data); } InetSocketAddress replicaToKill = lh.getLedgerMetadata().getEnsembles() .get(0L).get(0); LOG.info("Killing Bookie {}", replicaToKill); killBookie(replicaToKill); int startNewBookie = startNewBookie(); for (int i = 0; i < 10; i++) { lh.addEntry(data); } InetSocketAddress newBkAddr = new InetSocketAddress(InetAddress .getLocalHost().getHostAddress(), startNewBookie); LOG.info("New Bookie addr :" + newBkAddr); Set<LedgerFragment> result = getFragmentsToReplicate(lh); BookKeeperAdmin admin = new BookKeeperAdmin(baseClientConf); lh.close(); // Entries 0-9 should be copied to the new bookie for (LedgerFragment lf : result) { admin.replicateLedgerFragment(lh, lf, newBkAddr); } // Kill all bookies except the newly replicated bookie SortedMap<Long, ArrayList<InetSocketAddress>> allBookiesBeforeReplication = lh .getLedgerMetadata().getEnsembles(); Set<Entry<Long, ArrayList<InetSocketAddress>>> entrySet = allBookiesBeforeReplication .entrySet(); for (Entry<Long, ArrayList<InetSocketAddress>> entry : entrySet) { ArrayList<InetSocketAddress> bookies = entry.getValue(); for (InetSocketAddress bookie : bookies) { if (newBkAddr.equals(bookie)) { continue; } killBookie(bookie); } } // Should be able to read the entries from 0-9 verifyRecoveredLedgers(lh, 0, 9); } /** * Tests that fragment re-replication fails on the last unclosed ledger * fragment.
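 * The last fragment of an open ledger may still be receiving writes, so
 * copying it could silently miss entries; as the assertions below read,
 * replicateLedgerFragment is expected to reject unclosed fragments.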
*/ @Test(timeout=60000) public void testReplicateLFFailsOnlyOnLastUnClosedFragments() throws Exception { byte[] data = "TestLedgerFragmentReplication".getBytes(); LedgerHandle lh = bkc.createLedger(3, 3, TEST_DIGEST_TYPE, TEST_PSSWD); for (int i = 0; i < 10; i++) { lh.addEntry(data); } InetSocketAddress replicaToKill = lh.getLedgerMetadata().getEnsembles() .get(0L).get(0); startNewBookie(); LOG.info("Killing Bookie {}", replicaToKill); killBookie(replicaToKill); // Let's re-form the ensemble for (int i = 0; i < 10; i++) { lh.addEntry(data); } InetSocketAddress replicaToKill2 = lh.getLedgerMetadata() .getEnsembles().get(0L).get(1); int startNewBookie2 = startNewBookie(); LOG.info("Killing Bookie {}", replicaToKill2); killBookie(replicaToKill2); InetSocketAddress newBkAddr = new InetSocketAddress(InetAddress .getLocalHost().getHostAddress(), startNewBookie2); LOG.info("New Bookie addr :" + newBkAddr); Set<LedgerFragment> result = getFragmentsToReplicate(lh); BookKeeperAdmin admin = new BookKeeperAdmin(baseClientConf); // Closed fragments should be copied to the new bookie; the unclosed one must fail int unclosedCount = 0; for (LedgerFragment lf : result) { if (lf.isClosed()) { admin.replicateLedgerFragment(lh, lf, newBkAddr); } else { unclosedCount++; try { admin.replicateLedgerFragment(lh, lf, newBkAddr); fail("Shouldn't be able to rereplicate an unclosed fragment"); } catch (BKException bke) { // correct behaviour } } } assertEquals("Should be only one unclosed fragment", 1, unclosedCount); } /** * Tests that replicateLedgerFragment should fail if the replication fails. */ @Test(timeout=60000) public void testReplicateLFShouldReturnFalseIfTheReplicationFails() throws Exception { byte[] data = "TestLedgerFragmentReplication".getBytes(); LedgerHandle lh = bkc.createLedger(2, 1, TEST_DIGEST_TYPE, TEST_PSSWD); for (int i = 0; i < 10; i++) { lh.addEntry(data); } // Kill the first Bookie InetSocketAddress replicaToKill = lh.getLedgerMetadata().getEnsembles() .get(0L).get(0); killBookie(replicaToKill); LOG.info("Killed Bookie =" + replicaToKill); // Write some more entries for (int i = 0; i < 10; i++) { lh.addEntry(data); } // Kill the second Bookie replicaToKill = lh.getLedgerMetadata().getEnsembles().get(0L).get(0); killBookie(replicaToKill); LOG.info("Killed Bookie =" + replicaToKill); Set<LedgerFragment> fragments = getFragmentsToReplicate(lh); BookKeeperAdmin admin = new BookKeeperAdmin(baseClientConf); int startNewBookie = startNewBookie(); InetSocketAddress additionalBK = new InetSocketAddress(InetAddress .getLocalHost().getHostAddress(), startNewBookie); for (LedgerFragment lf : fragments) { try { admin.replicateLedgerFragment(lh, lf, additionalBK); } catch (BKException.BKLedgerRecoveryException e) { // expected } } } /** * Tests that splitIntoSubFragments should be able to split the original * passed fragment into sub-fragments at the correct boundaries */ @Test(timeout = 30000) public void testSplitIntoSubFragmentsWithDifferentFragmentBoundaries() throws Exception { LedgerMetadata metadata = new LedgerMetadata(3, 3, 3, TEST_DIGEST_TYPE, TEST_PSSWD) { @Override ArrayList<InetSocketAddress> getEnsemble(long entryId) { return null; } @Override public boolean isClosed() { return true; } }; LedgerHandle lh = new LedgerHandle(bkc, 0, metadata, TEST_DIGEST_TYPE, TEST_PSSWD); testSplitIntoSubFragments(10, 21, -1, 1, lh); testSplitIntoSubFragments(10, 21, 20, 1, lh); testSplitIntoSubFragments(0, 0, 10, 1, lh); testSplitIntoSubFragments(0, 1, 1, 2, lh); testSplitIntoSubFragments(20, 24, 2, 3, lh); testSplitIntoSubFragments(21, 32, 3, 4, lh); testSplitIntoSubFragments(22, 103, 11, 8, lh);
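// A worked check of the boundary arithmetic in the case above:
// 22..103 spans 82 entries; with 11 entries per sub-fragment that is
// 7 full sub-fragments plus one partial of 5 entries, i.e. 8 in total.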
testSplitIntoSubFragments(49, 51, 1, 3, lh); testSplitIntoSubFragments(11, 101, 3, 31, lh); } /** assert the sub-fragment boundaries */ void testSplitIntoSubFragments(final long oriFragmentFirstEntry, final long oriFragmentLastEntry, long entriesPerSubFragment, long expectedSubFragments, LedgerHandle lh) { LedgerFragment fr = new LedgerFragment(lh, oriFragmentFirstEntry, oriFragmentLastEntry, 0) { @Override public long getLastStoredEntryId() { return oriFragmentLastEntry; } @Override public long getFirstStoredEntryId() { return oriFragmentFirstEntry; } }; Set<LedgerFragment> subFragments = LedgerFragmentReplicator .splitIntoSubFragments(lh, fr, entriesPerSubFragment); assertEquals(expectedSubFragments, subFragments.size()); int fullSubFragment = 0; int partialSubFragment = 0; for (LedgerFragment ledgerFragment : subFragments) { if ((ledgerFragment.getLastKnownEntryId() - ledgerFragment.getFirstEntryId() + 1) == entriesPerSubFragment) { fullSubFragment++; } else { long totalEntriesToReplicate = oriFragmentLastEntry - oriFragmentFirstEntry + 1; if (entriesPerSubFragment <= 0 || totalEntriesToReplicate / entriesPerSubFragment == 0) { assertEquals( "FirstEntryId should be same as original fragment's firstEntryId", fr.getFirstEntryId(), ledgerFragment .getFirstEntryId()); assertEquals( "LastEntryId should be same as original fragment's lastEntryId", fr.getLastKnownEntryId(), ledgerFragment .getLastKnownEntryId()); } else { long partialSplitEntries = totalEntriesToReplicate % entriesPerSubFragment; assertEquals( "Partial fragment with wrong entry boundaries", ledgerFragment.getLastKnownEntryId() - ledgerFragment.getFirstEntryId() + 1, partialSplitEntries); } partialSubFragment++; } } assertEquals("Unexpected number of sub fragments", fullSubFragment + partialSubFragment, expectedSubFragments); assertTrue("There should be only one or zero partial sub fragments", partialSubFragment == 0 || partialSubFragment == 1); } private Set<LedgerFragment> getFragmentsToReplicate(LedgerHandle lh) throws InterruptedException { LedgerChecker checker = new LedgerChecker(bkc); CheckerCallback cb = new CheckerCallback(); checker.checkLedger(lh, cb); Set<LedgerFragment> fragments = cb.waitAndGetResult(); return fragments; } private void verifyRecoveredLedgers(LedgerHandle lh, long startEntryId, long endEntryId) throws BKException, InterruptedException { LedgerHandle lhs = bkc.openLedgerNoRecovery(lh.getId(), TEST_DIGEST_TYPE, TEST_PSSWD); Enumeration<LedgerEntry> entries = lhs.readEntries(startEntryId, endEntryId); assertTrue("Should have the elements", entries.hasMoreElements()); while (entries.hasMoreElements()) { LedgerEntry entry = entries.nextElement(); assertEquals("TestLedgerFragmentReplication", new String(entry .getEntry())); } } } TestReadTimeout.java000066400000000000000000000072301244507361200346760ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/clientpackage org.apache.bookkeeper.client; /* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License.
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ import org.junit.*; import java.net.InetSocketAddress; import java.util.Enumeration; import java.util.concurrent.CyclicBarrier; import java.util.concurrent.CountDownLatch; import java.util.concurrent.atomic.AtomicBoolean; import org.apache.bookkeeper.conf.ClientConfiguration; import org.apache.bookkeeper.client.AsyncCallback.AddCallback; import org.apache.bookkeeper.client.LedgerHandle; import org.apache.bookkeeper.client.LedgerEntry; import org.apache.bookkeeper.client.BookKeeper; import org.apache.bookkeeper.client.BookKeeperAdmin; import org.apache.bookkeeper.client.BKException; import org.apache.bookkeeper.client.BookKeeper.DigestType; import org.apache.bookkeeper.test.BookKeeperClusterTestCase; import java.util.HashSet; import java.util.Set; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * This unit test tests that a write eventually completes, with a changed * ensemble, when a bookie does not respond within the read timeout. * */ public class TestReadTimeout extends BookKeeperClusterTestCase { static Logger LOG = LoggerFactory.getLogger(TestReadTimeout.class); DigestType digestType; public TestReadTimeout() { super(10); this.digestType = DigestType.CRC32; } @SuppressWarnings("deprecation") @Test(timeout=60000) public void testReadTimeout() throws Exception { final AtomicBoolean completed = new AtomicBoolean(false); LedgerHandle writelh = bkc.createLedger(3,3,digestType, "testPasswd".getBytes()); String tmp = "Foobar"; final int numEntries = 10; for (int i = 0; i < numEntries; i++) { writelh.addEntry(tmp.getBytes()); } Set<InetSocketAddress> beforeSet = new HashSet<InetSocketAddress>(); for (InetSocketAddress addr : writelh.getLedgerMetadata().getEnsemble(numEntries)) { beforeSet.add(addr); } final InetSocketAddress bookieToSleep = writelh.getLedgerMetadata().getEnsemble(numEntries).get(0); int sleeptime = baseClientConf.getReadTimeout()*3; CountDownLatch latch = sleepBookie(bookieToSleep, sleeptime); latch.await(); writelh.asyncAddEntry(tmp.getBytes(), new AddCallback() { public void addComplete(int rc, LedgerHandle lh, long entryId, Object ctx) { completed.set(true); } }, null); Thread.sleep((baseClientConf.getReadTimeout()*3)*1000); Assert.assertTrue("Write request did not finish", completed.get()); Set<InetSocketAddress> afterSet = new HashSet<InetSocketAddress>(); for (InetSocketAddress addr : writelh.getLedgerMetadata().getEnsemble(numEntries+1)) { afterSet.add(addr); } beforeSet.removeAll(afterSet); Assert.assertTrue("Bookie set should not match", beforeSet.size() != 0); } } TestSpeculativeRead.java000066400000000000000000000324441244507361200355410ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/clientpackage org.apache.bookkeeper.client; /* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License.
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ import org.junit.*; import java.net.InetSocketAddress; import java.util.ArrayList; import java.util.Set; import java.util.HashSet; import java.util.Enumeration; import java.util.concurrent.TimeUnit; import java.util.concurrent.CountDownLatch; import org.apache.bookkeeper.conf.ClientConfiguration; import org.apache.bookkeeper.client.LedgerHandle; import org.apache.bookkeeper.client.LedgerEntry; import org.apache.bookkeeper.client.BookKeeper; import org.apache.bookkeeper.client.BKException; import org.apache.bookkeeper.client.AsyncCallback.ReadCallback; import org.apache.bookkeeper.client.BookKeeper.DigestType; import org.apache.bookkeeper.test.BaseTestCase; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * This unit test tests speculative reads; * */ public class TestSpeculativeRead extends BaseTestCase { static Logger LOG = LoggerFactory.getLogger(TestSpeculativeRead.class); DigestType digestType; byte[] passwd = "specPW".getBytes(); public TestSpeculativeRead(DigestType digestType) { super(10); this.digestType = digestType; } long getLedgerToRead(int ensemble, int quorum) throws Exception { byte[] data = "Data for test".getBytes(); LedgerHandle l = bkc.createLedger(ensemble, quorum, digestType, passwd); for (int i = 0; i < 10; i++) { l.addEntry(data); } l.close(); return l.getId(); } @SuppressWarnings("deprecation") BookKeeper createClient(int specTimeout) throws Exception { ClientConfiguration conf = new ClientConfiguration() .setSpeculativeReadTimeout(specTimeout) .setReadTimeout(30000); conf.setZkServers(zkUtil.getZooKeeperConnectString()); return new BookKeeper(conf); } class LatchCallback implements ReadCallback { CountDownLatch l = new CountDownLatch(1); boolean success = false; long startMillis = System.currentTimeMillis(); long endMillis = Long.MAX_VALUE; public void readComplete(int rc, LedgerHandle lh, Enumeration<LedgerEntry> seq, Object ctx) { endMillis = System.currentTimeMillis(); LOG.debug("Got response {} {}", rc, getDuration()); success = rc == BKException.Code.OK; l.countDown(); } long getDuration() { return endMillis - startMillis; } void expectSuccess(int milliseconds) throws Exception { assertTrue(l.await(milliseconds, TimeUnit.MILLISECONDS)); assertTrue(success); } void expectFail(int milliseconds) throws Exception { assertTrue(l.await(milliseconds, TimeUnit.MILLISECONDS)); assertFalse(success); } void expectTimeout(int milliseconds) throws Exception { assertFalse(l.await(milliseconds, TimeUnit.MILLISECONDS)); } } /** * Test basic speculative read functionality. * - Create 2 clients, one with speculative reads enabled, the other not; * the regular read timeout is effectively disabled for both. * - create ledger * - sleep second bookie in ensemble * - read first entry, both should find it on the first bookie. * - read second entry, the spec client should find it on bookie three, * the non-spec client should hang.
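 * (With the speculative timeout set to 2000ms below and the regular
 * read timeout effectively disabled, the speculative client retries
 * the next replica after about 2s while the other client keeps
 * waiting on the sleeping bookie.)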
*/ @Test(timeout=60000) public void testSpeculativeRead() throws Exception { long id = getLedgerToRead(3,2); BookKeeper bknospec = createClient(0); // disabled BookKeeper bkspec = createClient(2000); LedgerHandle lnospec = bknospec.openLedger(id, digestType, passwd); LedgerHandle lspec = bkspec.openLedger(id, digestType, passwd); // sleep second bookie CountDownLatch sleepLatch = new CountDownLatch(1); InetSocketAddress second = lnospec.getLedgerMetadata().getEnsembles().get(0L).get(1); sleepBookie(second, sleepLatch); try { // read first entry, both go to first bookie, should be fine LatchCallback nospeccb = new LatchCallback(); LatchCallback speccb = new LatchCallback(); lnospec.asyncReadEntries(0, 0, nospeccb, null); lspec.asyncReadEntries(0, 0, speccb, null); nospeccb.expectSuccess(2000); speccb.expectSuccess(2000); // read second entry, both look for second book, spec read client // tries third bookie, nonspec client hangs as read timeout is very long. nospeccb = new LatchCallback(); speccb = new LatchCallback(); lnospec.asyncReadEntries(1, 1, nospeccb, null); lspec.asyncReadEntries(1, 1, speccb, null); speccb.expectSuccess(4000); nospeccb.expectTimeout(4000); } finally { sleepLatch.countDown(); lspec.close(); lnospec.close(); bkspec.close(); bknospec.close(); } } /** * Test that if more than one replica is down, we can still read, as long as the quorum * size is larger than the number of down replicas. */ @Test(timeout=60000) public void testSpeculativeReadMultipleReplicasDown() throws Exception { long id = getLedgerToRead(5,5); int timeout = 5000; BookKeeper bkspec = createClient(timeout); LedgerHandle l = bkspec.openLedger(id, digestType, passwd); // sleep bookie 1, 2 & 4 CountDownLatch sleepLatch = new CountDownLatch(1); sleepBookie(l.getLedgerMetadata().getEnsembles().get(0L).get(1), sleepLatch); sleepBookie(l.getLedgerMetadata().getEnsembles().get(0L).get(2), sleepLatch); sleepBookie(l.getLedgerMetadata().getEnsembles().get(0L).get(4), sleepLatch); try { // read first entry, should complete faster than timeout // as bookie 0 has the entry LatchCallback latch0 = new LatchCallback(); l.asyncReadEntries(0, 0, latch0, null); latch0.expectSuccess(timeout/2); // second should have to hit two timeouts (bookie 1 & 2) // bookie 3 has the entry LatchCallback latch1 = new LatchCallback(); l.asyncReadEntries(1, 1, latch1, null); latch1.expectTimeout(timeout); latch1.expectSuccess(timeout*2); LOG.info("Timeout {} latch1 duration {}", timeout, latch1.getDuration()); assertTrue("should have taken longer than two timeouts, but less than 3", latch1.getDuration() >= timeout*2 && latch1.getDuration() < timeout*3); // third should have to hit one timeouts (bookie 2) // bookie 3 has the entry LatchCallback latch2 = new LatchCallback(); l.asyncReadEntries(2, 2, latch2, null); latch2.expectTimeout(timeout/2); latch2.expectSuccess(timeout); LOG.info("Timeout {} latch2 duration {}", timeout, latch2.getDuration()); assertTrue("should have taken longer than one timeout, but less than 2", latch2.getDuration() >= timeout && latch2.getDuration() < timeout*2); // fourth should have no timeout // bookie 3 has the entry LatchCallback latch3 = new LatchCallback(); l.asyncReadEntries(3, 3, latch3, null); latch3.expectSuccess(timeout/2); // fifth should hit one timeout, (bookie 4) // bookie 0 has the entry LatchCallback latch4 = new LatchCallback(); l.asyncReadEntries(4, 4, latch4, null); latch4.expectTimeout(timeout/2); latch4.expectSuccess(timeout); LOG.info("Timeout {} latch4 duration {}", timeout, 
latch4.getDuration()); assertTrue("should have taken longer than one timeout, but less than 2", latch4.getDuration() >= timeout && latch4.getDuration() < timeout*2); } finally { sleepLatch.countDown(); l.close(); bkspec.close(); } } /** * Test that nothing bad happens if the original read completes * after a speculative read has been kicked off. */ @Test(timeout=60000) public void testSpeculativeReadFirstReadCompleteIsOk() throws Exception { long id = getLedgerToRead(2,2); int timeout = 1000; BookKeeper bkspec = createClient(timeout); LedgerHandle l = bkspec.openLedger(id, digestType, passwd); // sleep bookies CountDownLatch sleepLatch0 = new CountDownLatch(1); CountDownLatch sleepLatch1 = new CountDownLatch(1); sleepBookie(l.getLedgerMetadata().getEnsembles().get(0L).get(0), sleepLatch0); sleepBookie(l.getLedgerMetadata().getEnsembles().get(0L).get(1), sleepLatch1); try { // read goes to first bookie, spec read timeout occurs, // goes to second LatchCallback latch0 = new LatchCallback(); l.asyncReadEntries(0, 0, latch0, null); latch0.expectTimeout(timeout); // wake up first bookie sleepLatch0.countDown(); latch0.expectSuccess(timeout/2); sleepLatch1.countDown(); // check we can read next entry without issue LatchCallback latch1 = new LatchCallback(); l.asyncReadEntries(1, 1, latch1, null); latch1.expectSuccess(timeout/2); } finally { sleepLatch0.countDown(); sleepLatch1.countDown(); l.close(); bkspec.close(); } } /** * Unit test for the speculative read scheduling method */ @Test(timeout=60000) public void testSpeculativeReadScheduling() throws Exception { long id = getLedgerToRead(3,2); int timeout = 1000; BookKeeper bkspec = createClient(timeout); LedgerHandle l = bkspec.openLedger(id, digestType, passwd); ArrayList<InetSocketAddress> ensemble = l.getLedgerMetadata().getEnsembles().get(0L); Set<InetSocketAddress> allHosts = new HashSet<InetSocketAddress>(ensemble); Set<InetSocketAddress> noHost = new HashSet<InetSocketAddress>(); Set<InetSocketAddress> secondHostOnly = new HashSet<InetSocketAddress>(); secondHostOnly.add(ensemble.get(1)); PendingReadOp.LedgerEntryRequest req0 = null, req2 = null, req4 = null; try { LatchCallback latch0 = new LatchCallback(); PendingReadOp op = new PendingReadOp(l, bkspec.scheduler, 0, 5, latch0, null); // if we've already heard from all hosts, // we only send the initial read req0 = op.new LedgerEntryRequest(ensemble, l.getId(), 0); assertTrue("Should have sent to first", req0.maybeSendSpeculativeRead(allHosts).equals(ensemble.get(0))); assertNull("Should not have sent another", req0.maybeSendSpeculativeRead(allHosts)); // if we have heard from some hosts, but not one we have sent to // send again req2 = op.new LedgerEntryRequest(ensemble, l.getId(), 2); assertTrue("Should have sent to third", req2.maybeSendSpeculativeRead(noHost).equals(ensemble.get(2))); assertTrue("Should have sent to first", req2.maybeSendSpeculativeRead(secondHostOnly).equals(ensemble.get(0))); // if we have heard from some hosts, which includes one we sent to // do not read again req4 = op.new LedgerEntryRequest(ensemble, l.getId(), 4); assertTrue("Should have sent to second", req4.maybeSendSpeculativeRead(noHost).equals(ensemble.get(1))); assertNull("Should not have sent another", req4.maybeSendSpeculativeRead(secondHostOnly)); } finally { for (PendingReadOp.LedgerEntryRequest req : new PendingReadOp.LedgerEntryRequest[] { req0, req2, req4 }) { if (req != null) { int i = 0; while (!req.isComplete()) { if (i++ > 10) { break; // wait for up to 10 seconds } Thread.sleep(1000); } assertTrue("Request should be done", req.isComplete()); } } l.close(); bkspec.close(); } }
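// The scheduling contract exercised above, as these assertions read it:
// maybeSendSpeculativeRead(heardFrom) picks the next untried replica in
// write-set order and returns the bookie the speculative read was sent
// to, or null when a replica we already sent to has responded, in which
// case no retry is needed.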
}TestWatchEnsembleChange.java000066400000000000000000000133101244507361200362770ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/client/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.client; import org.apache.bookkeeper.client.BookKeeper.DigestType; import org.apache.bookkeeper.meta.FlatLedgerManagerFactory; import org.apache.bookkeeper.meta.HierarchicalLedgerManagerFactory; import org.apache.bookkeeper.meta.LedgerManager; import org.apache.bookkeeper.meta.LedgerManagerFactory; import org.apache.bookkeeper.meta.MSLedgerManagerFactory; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.LedgerMetadataListener; import org.apache.bookkeeper.test.BookKeeperClusterTestCase; import org.apache.bookkeeper.util.ReflectionUtils; import org.apache.bookkeeper.versioning.Version; import org.junit.Test; import org.junit.runner.RunWith; import org.junit.runners.Parameterized; import org.junit.runners.Parameterized.Parameters; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import java.net.InetSocketAddress; import java.nio.ByteBuffer; import java.util.ArrayList; import java.util.Arrays; import java.util.Collection; import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; @RunWith(Parameterized.class) public class TestWatchEnsembleChange extends BookKeeperClusterTestCase { static Logger LOG = LoggerFactory.getLogger(TestWatchEnsembleChange.class); final DigestType digestType; final Class lmFactoryCls; public TestWatchEnsembleChange(Class lmFactoryCls) { super(7); this.digestType = DigestType.CRC32; this.lmFactoryCls = lmFactoryCls; baseClientConf.setLedgerManagerFactoryClass(lmFactoryCls); baseConf.setLedgerManagerFactoryClass(lmFactoryCls); } @Parameters public static Collection configs() { return Arrays.asList(new Object[][] { { FlatLedgerManagerFactory.class }, { HierarchicalLedgerManagerFactory.class }, { MSLedgerManagerFactory.class } }); } @Test(timeout = 60000) public void testWatchEnsembleChange() throws Exception { int numEntries = 10; LedgerHandle lh = bkc.createLedger(3, 3, 3, digestType, "".getBytes()); for (int i=0; i ensemble = lh.getLedgerMetadata().currentEnsemble; for (InetSocketAddress addr : ensemble) { killBookie(addr); } // write another batch of entries, which will trigger ensemble change for (int i=0; i(){ @Override public void operationComplete(int rc, Long result) { bbLedgerId.putLong(result); bbLedgerId.flip(); createLatch.countDown(); } }); assertTrue(createLatch.await(2000, TimeUnit.MILLISECONDS)); final long createdLid = bbLedgerId.getLong(); manager.registerLedgerMetadataListener( createdLid, new LedgerMetadataListener() { @Override public void onChanged( 
long ledgerId, LedgerMetadata metadata ) { assertEquals(ledgerId, createdLid); assertEquals(metadata, null); removeLatch.countDown(); } }); manager.removeLedgerMetadata( createdLid, Version.ANY, new BookkeeperInternalCallbacks.GenericCallback() { @Override public void operationComplete(int rc, Void result) { assertEquals(rc, BKException.Code.OK); } }); assertTrue(removeLatch.await(2000, TimeUnit.MILLISECONDS)); } } bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/meta/000077500000000000000000000000001244507361200304765ustar00rootroot00000000000000GcLedgersTest.java000066400000000000000000000244401244507361200337650ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/meta/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.meta; import java.util.ArrayList; import java.util.Collections; import java.util.HashSet; import java.util.LinkedList; import java.util.List; import java.util.Queue; import java.util.Random; import java.util.Set; import java.util.SortedSet; import java.util.TreeSet; import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicInteger; import org.apache.bookkeeper.bookie.GarbageCollector; import org.apache.bookkeeper.bookie.ScanAndCompareGarbageCollector; import org.apache.bookkeeper.client.BKException; import org.apache.bookkeeper.client.BookKeeper.DigestType; import org.apache.bookkeeper.client.LedgerMetadata; import org.apache.bookkeeper.meta.LedgerManager.LedgerRange; import org.apache.bookkeeper.meta.LedgerManager.LedgerRangeIterator; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.GenericCallback; import org.apache.bookkeeper.versioning.Version; import org.junit.Test; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Test garbage collection ledgers in ledger manager */ public class GcLedgersTest extends LedgerManagerTestCase { static final Logger LOG = LoggerFactory.getLogger(GcLedgersTest.class); public GcLedgersTest(Class lmFactoryCls) { super(lmFactoryCls); } /** * Create ledgers */ private void createLedgers(int numLedgers, final Set createdLedgers) { final AtomicInteger expected = new AtomicInteger(numLedgers); for (int i=0; i() { @Override public void operationComplete(int rc, Long ledgerId) { if (rc == BKException.Code.OK) { activeLedgers.put(ledgerId, true); createdLedgers.add(ledgerId); } synchronized (expected) { int num = expected.decrementAndGet(); if (num == 0) { expected.notify(); } } } }); } synchronized (expected) { try { while (expected.get() > 0) { expected.wait(100); } } catch (InterruptedException ie) { } } } private void removeLedger(long ledgerId) throws Exception { final AtomicInteger rc = new AtomicInteger(0); final 
CountDownLatch latch = new CountDownLatch(1); getLedgerManager().removeLedgerMetadata(ledgerId, Version.ANY, new GenericCallback() { @Override public void operationComplete(int rc2, Void result) { rc.set(rc2); latch.countDown(); } }); assertTrue(latch.await(10, TimeUnit.SECONDS)); assertEquals("Remove should have succeeded", 0, rc.get()); } @Test(timeout=60000) public void testGarbageCollectLedgers() throws Exception { int numLedgers = 100; int numRemovedLedgers = 10; final Set createdLedgers = new HashSet(); final Set removedLedgers = new HashSet(); // create 100 ledgers createLedgers(numLedgers, createdLedgers); Random r = new Random(System.currentTimeMillis()); final List tmpList = new ArrayList(); tmpList.addAll(createdLedgers); Collections.shuffle(tmpList, r); // random remove several ledgers for (int i=0; i() { @Override public void operationComplete(int rc, Void result) { synchronized (removedLedgers) { removedLedgers.notify(); } } }); removedLedgers.wait(); } removedLedgers.add(ledgerId); createdLedgers.remove(ledgerId); } final CountDownLatch inGcProgress = new CountDownLatch(1); final CountDownLatch createLatch = new CountDownLatch(1); final CountDownLatch endLatch = new CountDownLatch(2); final GarbageCollector garbageCollector = new ScanAndCompareGarbageCollector(getLedgerManager(), activeLedgers); Thread gcThread = new Thread() { @Override public void run() { garbageCollector.gc(new GarbageCollector.GarbageCleaner() { boolean paused = false; @Override public void clean(long ledgerId) { if (!paused) { inGcProgress.countDown(); try { createLatch.await(); } catch (InterruptedException ie) { } paused = true; } LOG.info("Garbage Collected ledger {}", ledgerId); } }); LOG.info("Gc Thread quits."); endLatch.countDown(); } }; Thread createThread = new Thread() { @Override public void run() { try { inGcProgress.await(); // create 10 more ledgers createLedgers(10, createdLedgers); LOG.info("Finished creating 10 more ledgers."); createLatch.countDown(); } catch (Exception e) { } LOG.info("Create Thread quits."); endLatch.countDown(); } }; createThread.start(); gcThread.start(); endLatch.await(); // test ledgers for (Long ledger : removedLedgers) { assertFalse(activeLedgers.containsKey(ledger)); } for (Long ledger : createdLedgers) { assertTrue(activeLedgers.containsKey(ledger)); } } @Test(timeout=60000) public void testGcLedgersOutsideRange() throws Exception { final SortedSet createdLedgers = Collections.synchronizedSortedSet(new TreeSet()); final Queue cleaned = new LinkedList(); int numLedgers = 100; createLedgers(numLedgers, createdLedgers); final GarbageCollector garbageCollector = new ScanAndCompareGarbageCollector(getLedgerManager(), activeLedgers); GarbageCollector.GarbageCleaner cleaner = new GarbageCollector.GarbageCleaner() { @Override public void clean(long ledgerId) { LOG.info("Cleaned {}", ledgerId); cleaned.add(ledgerId); } }; garbageCollector.gc(cleaner); assertNull("Should have cleaned nothing", cleaned.poll()); long last = createdLedgers.last(); removeLedger(last); garbageCollector.gc(cleaner); assertNotNull("Should have cleaned something", cleaned.peek()); assertEquals("Should have cleaned last ledger" + last, (long)last, (long)cleaned.poll()); long first = createdLedgers.first(); removeLedger(first); garbageCollector.gc(cleaner); assertNotNull("Should have cleaned something", cleaned.peek()); assertEquals("Should have cleaned first ledger" + first, (long)first, (long)cleaned.poll()); } @Test(timeout=60000) public void testGcLedgersNotLast() throws Exception { final 
SortedSet createdLedgers = Collections.synchronizedSortedSet(new TreeSet()); final List cleaned = new ArrayList(); // Create enough ledgers to span over 4 ranges in the hierarchical ledger manager implementation final int numLedgers = 30001; createLedgers(numLedgers, createdLedgers); final GarbageCollector garbageCollector = new ScanAndCompareGarbageCollector(getLedgerManager(), activeLedgers); GarbageCollector.GarbageCleaner cleaner = new GarbageCollector.GarbageCleaner() { @Override public void clean(long ledgerId) { LOG.info("Cleaned {}", ledgerId); cleaned.add(ledgerId); } }; SortedSet scannedLedgers = new TreeSet(); LedgerRangeIterator iterator = getLedgerManager().getLedgerRanges(); while (iterator.hasNext()) { LedgerRange ledgerRange = iterator.next(); scannedLedgers.addAll(ledgerRange.getLedgers()); } assertEquals(createdLedgers, scannedLedgers); garbageCollector.gc(cleaner); assertTrue("Should have cleaned nothing", cleaned.isEmpty()); long first = createdLedgers.first(); removeLedger(first); garbageCollector.gc(cleaner); assertEquals("Should have cleaned something", 1, cleaned.size()); assertEquals("Should have cleaned first ledger" + first, (long)first, (long)cleaned.get(0)); } } LedgerLayoutTest.java000066400000000000000000000136601244507361200345300ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/meta/** * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
* */ package org.apache.bookkeeper.meta; import java.io.IOException; import java.lang.reflect.Field; import org.apache.bookkeeper.conf.ClientConfiguration; import org.apache.zookeeper.CreateMode; import org.apache.zookeeper.ZooDefs.Ids; import org.apache.bookkeeper.test.BookKeeperClusterTestCase; import org.apache.bookkeeper.util.BookKeeperConstants; import org.junit.Test; public class LedgerLayoutTest extends BookKeeperClusterTestCase { public LedgerLayoutTest() { super(0); } @Test(timeout=60000) public void testLedgerLayout() throws Exception { ClientConfiguration conf = new ClientConfiguration(); conf.setLedgerManagerFactoryClass(HierarchicalLedgerManagerFactory.class); String ledgerRootPath = "/testLedgerLayout"; zkc.create(ledgerRootPath, new byte[0], Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT); LedgerLayout layout = LedgerLayout.readLayout(zkc, ledgerRootPath); assertTrue("Layout should be null", layout == null); String testName = "foobar"; int testVersion = 0xdeadbeef; // use layout defined in configuration also create it in zookeeper LedgerLayout layout2 = new LedgerLayout(testName, testVersion); layout2.store(zkc, ledgerRootPath); layout = LedgerLayout.readLayout(zkc, ledgerRootPath); assertEquals(testName, layout.getManagerType()); assertEquals(testVersion, layout.getManagerVersion()); } private void writeLedgerLayout( String ledgersRootPath, String managerType, int managerVersion, int layoutVersion) throws Exception { LedgerLayout layout = new LedgerLayout(managerType, managerVersion); Field f = LedgerLayout.class.getDeclaredField("layoutFormatVersion"); f.setAccessible(true); f.set(layout, layoutVersion); layout.store(zkc, ledgersRootPath); } @Test(timeout=60000) public void testBadVersionLedgerLayout() throws Exception { ClientConfiguration conf = new ClientConfiguration(); // write bad version ledger layout writeLedgerLayout(conf.getZkLedgersRootPath(), FlatLedgerManagerFactory.class.getName(), FlatLedgerManagerFactory.CUR_VERSION, LedgerLayout.LAYOUT_FORMAT_VERSION + 1); try { LedgerLayout.readLayout(zkc, conf.getZkLedgersRootPath()); fail("Shouldn't reach here!"); } catch (IOException ie) { assertTrue("Invalid exception", ie.getMessage().contains("version not compatible")); } } @Test(timeout=60000) public void testAbsentLedgerManagerLayout() throws Exception { ClientConfiguration conf = new ClientConfiguration(); String ledgersLayout = conf.getZkLedgersRootPath() + "/" + BookKeeperConstants.LAYOUT_ZNODE; // write bad format ledger layout StringBuilder sb = new StringBuilder(); sb.append(LedgerLayout.LAYOUT_FORMAT_VERSION).append("\n"); zkc.create(ledgersLayout, sb.toString().getBytes(), Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT); try { LedgerLayout.readLayout(zkc, conf.getZkLedgersRootPath()); fail("Shouldn't reach here!"); } catch (IOException ie) { assertTrue("Invalid exception", ie.getMessage().contains("version absent from")); } } @Test(timeout=60000) public void testBaseLedgerManagerLayout() throws Exception { ClientConfiguration conf = new ClientConfiguration(); String rootPath = conf.getZkLedgersRootPath(); String ledgersLayout = rootPath + "/" + BookKeeperConstants.LAYOUT_ZNODE; // write bad format ledger layout StringBuilder sb = new StringBuilder(); sb.append(LedgerLayout.LAYOUT_FORMAT_VERSION).append("\n") .append(FlatLedgerManagerFactory.class.getName()); zkc.create(ledgersLayout, sb.toString().getBytes(), Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT); try { LedgerLayout.readLayout(zkc, rootPath); fail("Shouldn't reach here!"); } catch (IOException ie) { 
assertTrue("Invalid exception", ie.getMessage().contains("Invalid Ledger Manager")); } } @Test(timeout=60000) public void testReadV1LedgerManagerLayout() throws Exception { ClientConfiguration conf = new ClientConfiguration(); // write v1 ledger layout writeLedgerLayout(conf.getZkLedgersRootPath(), FlatLedgerManagerFactory.NAME, FlatLedgerManagerFactory.CUR_VERSION, 1); LedgerLayout layout = LedgerLayout.readLayout(zkc, conf.getZkLedgersRootPath()); assertNotNull("Should not be null", layout); assertEquals(FlatLedgerManagerFactory.NAME, layout.getManagerType()); assertEquals(FlatLedgerManagerFactory.CUR_VERSION, layout.getManagerVersion()); assertEquals(1, layout.getLayoutFormatVersion()); } } LedgerManagerIteratorTest.java000066400000000000000000000027431244507361200363370ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/meta/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.meta; import org.apache.bookkeeper.meta.LedgerManager.LedgerRangeIterator; import org.junit.Test; public class LedgerManagerIteratorTest extends LedgerManagerTestCase { public LedgerManagerIteratorTest(Class lmFactoryCls) { super(lmFactoryCls); } @Test(timeout = 60000) public void testIterateNoLedgers() throws Exception { LedgerManager lm = getLedgerManager(); LedgerRangeIterator lri = lm.getLedgerRanges(); assertNotNull(lri); if (lri.hasNext()) lri.next(); assertEquals(false, lri.hasNext()); assertEquals(false, lri.hasNext()); } } LedgerManagerTestCase.java000066400000000000000000000053311244507361200354150ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/meta/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
* */ package org.apache.bookkeeper.meta; import java.util.Arrays; import java.util.Collection; import org.apache.bookkeeper.test.BookKeeperClusterTestCase; import org.apache.bookkeeper.util.SnapshotMap; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.junit.After; import org.junit.Before; import org.junit.runner.RunWith; import org.junit.runners.Parameterized; import org.junit.runners.Parameterized.Parameters; /** * Test case to run over several ledger managers */ @RunWith(Parameterized.class) public abstract class LedgerManagerTestCase extends BookKeeperClusterTestCase { static final Logger LOG = LoggerFactory.getLogger(LedgerManagerTestCase.class); LedgerManagerFactory ledgerManagerFactory; LedgerManager ledgerManager = null; SnapshotMap<Long, Boolean> activeLedgers = null; public LedgerManagerTestCase(Class<? extends LedgerManagerFactory> lmFactoryCls) { super(0); activeLedgers = new SnapshotMap<Long, Boolean>(); baseConf.setLedgerManagerFactoryClass(lmFactoryCls); } public LedgerManager getLedgerManager() { if (null == ledgerManager) { ledgerManager = ledgerManagerFactory.newLedgerManager(); } return ledgerManager; } @Parameters public static Collection<Object[]> configs() { return Arrays.asList(new Object[][] { { FlatLedgerManagerFactory.class }, { HierarchicalLedgerManagerFactory.class }, { MSLedgerManagerFactory.class } }); } @Before @Override public void setUp() throws Exception { super.setUp(); ledgerManagerFactory = LedgerManagerFactory.newLedgerManagerFactory(baseConf, zkc); } @After @Override public void tearDown() throws Exception { if (null != ledgerManager) { ledgerManager.close(); } ledgerManagerFactory.uninitialize(); super.tearDown(); } } TestLedgerManager.java000066400000000000000000000266661244507361200346330ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/meta/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License.
* */ package org.apache.bookkeeper.meta; import org.apache.bookkeeper.client.BookKeeper.DigestType; import org.apache.bookkeeper.conf.ClientConfiguration; import org.apache.zookeeper.WatchedEvent; import org.apache.zookeeper.Watcher; import org.apache.zookeeper.ZooKeeper; import org.apache.zookeeper.CreateMode; import org.apache.zookeeper.ZooDefs.Ids; import java.util.concurrent.CyclicBarrier; import java.util.concurrent.CountDownLatch; import java.util.List; import java.util.ArrayList; import java.lang.reflect.Field; import org.apache.bookkeeper.test.BookKeeperClusterTestCase; import org.junit.After; import org.junit.Before; import org.junit.Test; import static org.junit.Assert.*; import org.slf4j.Logger; import org.slf4j.LoggerFactory; public class TestLedgerManager extends BookKeeperClusterTestCase { static Logger LOG = LoggerFactory.getLogger(TestLedgerManager.class); public TestLedgerManager() { super(0); } private void writeLedgerLayout(String ledgersRootPath, String managerType, int managerVersion, int layoutVersion) throws Exception { LedgerLayout layout = new LedgerLayout(managerType, managerVersion); Field f = LedgerLayout.class.getDeclaredField("layoutFormatVersion"); f.setAccessible(true); f.set(layout, layoutVersion); layout.store(zkc, ledgersRootPath); } /** * Test bad client configuration */ @Test(timeout=60000) public void testBadConf() throws Exception { ClientConfiguration conf = new ClientConfiguration(); // success case String root0 = "/goodconf0"; zkc.create(root0, new byte[0], Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT); conf.setZkLedgersRootPath(root0); LedgerManagerFactory m = LedgerManagerFactory.newLedgerManagerFactory(conf, zkc); assertTrue("Ledger manager is unexpected type", (m instanceof FlatLedgerManagerFactory)); m.uninitialize(); // mismatching conf conf.setLedgerManagerFactoryClass(HierarchicalLedgerManagerFactory.class); try { LedgerManagerFactory.newLedgerManagerFactory(conf, zkc); fail("Shouldn't reach here"); } catch (Exception e) { LOG.error("Received exception", e); assertTrue("Invalid exception", e.getMessage().contains("does not match existing layout")); } // invalid ledger manager String root1 = "/badconf1"; zkc.create(root1, new byte[0], Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT); conf.setZkLedgersRootPath(root1); conf.setLedgerManagerFactoryClassName("DoesNotExist"); try { LedgerManagerFactory f = LedgerManagerFactory.newLedgerManagerFactory(conf, zkc); fail("Shouldn't reach here"); } catch (Exception e) { LOG.error("Received exception", e); assertTrue("Invalid exception", e.getMessage().contains("Failed to get ledger manager factory class from configuration")); } } /** * Test bad client configuration with a v1 layout */ @Test(timeout=60000) public void testBadConfV1() throws Exception { ClientConfiguration conf = new ClientConfiguration(); String root0 = "/goodconf0"; zkc.create(root0, new byte[0], Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT); conf.setZkLedgersRootPath(root0); // write v1 layout writeLedgerLayout(root0, FlatLedgerManagerFactory.NAME, FlatLedgerManagerFactory.CUR_VERSION, 1); conf.setLedgerManagerFactoryClass(FlatLedgerManagerFactory.class); LedgerManagerFactory m = LedgerManagerFactory.newLedgerManagerFactory(conf, zkc); assertTrue("Ledger manager is unexpected type", (m instanceof FlatLedgerManagerFactory)); m.uninitialize(); // v2 setting doesn't affect v1 conf.setLedgerManagerFactoryClass(HierarchicalLedgerManagerFactory.class); m = LedgerManagerFactory.newLedgerManagerFactory(conf, zkc); assertTrue("Ledger manager is unexpected type", (m
instanceof FlatLedgerManagerFactory)); m.uninitialize(); // mismatching conf conf.setLedgerManagerType(HierarchicalLedgerManagerFactory.NAME); try { LedgerManagerFactory.newLedgerManagerFactory(conf, zkc); fail("Shouldn't reach here"); } catch (Exception e) { LOG.error("Received exception", e); assertTrue("Invalid exception", e.getMessage().contains("does not match existing layout")); } } /** * Test bad zk configuration */ @Test(timeout=60000) public void testBadZkContents() throws Exception { ClientConfiguration conf = new ClientConfiguration(); // bad type in zookeeper String root0 = "/badzk0"; zkc.create(root0, new byte[0], Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT); conf.setZkLedgersRootPath(root0); new LedgerLayout("DoesNotExist", 0xdeadbeef).store(zkc, root0); try { LedgerManagerFactory.newLedgerManagerFactory(conf, zkc); fail("Shouldn't reach here"); } catch (Exception e) { LOG.error("Received exception", e); assertTrue("Invalid exception", e.getMessage().contains("Failed to instantiate ledger manager factory")); } // bad version in zookeeper String root1 = "/badzk1"; zkc.create(root1, new byte[0], Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT); conf.setZkLedgersRootPath(root1); new LedgerLayout(FlatLedgerManagerFactory.class.getName(), 0xdeadbeef).store(zkc, root1); try { LedgerManagerFactory.newLedgerManagerFactory(conf, zkc); fail("Shouldn't reach here"); } catch (Exception e) { LOG.error("Received exception", e); assertTrue("Invalid exception", e.getMessage().contains("Incompatible layout version found")); } } private static class CreateLMThread extends Thread { private boolean success = false; private final String factoryCls; private final String root; private final CyclicBarrier barrier; private ZooKeeper zkc; CreateLMThread(String zkConnectString, String root, String factoryCls, CyclicBarrier barrier) throws Exception { this.factoryCls = factoryCls; this.barrier = barrier; this.root = root; final CountDownLatch latch = new CountDownLatch(1); zkc = new ZooKeeper(zkConnectString, 10000, new Watcher() { public void process(WatchedEvent event) { latch.countDown(); } }); latch.await(); } public void run() { ClientConfiguration conf = new ClientConfiguration(); conf.setLedgerManagerFactoryClassName(factoryCls); try { barrier.await(); LedgerManagerFactory factory = LedgerManagerFactory.newLedgerManagerFactory(conf, zkc); factory.uninitialize(); success = true; } catch (Exception e) { LOG.error("Failed to create ledger manager", e); } } public boolean isSuccessful() { return success; } public void close() throws Exception { zkc.close(); } } // test concurrent @Test(timeout=60000) public void testConcurrent1() throws Exception { /// everyone creates the same int numThreads = 50; // bad version in zookeeper String root0 = "/lmroot0"; zkc.create(root0, new byte[0], Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT); CyclicBarrier barrier = new CyclicBarrier(numThreads+1); List threads = new ArrayList(numThreads); for (int i = 0; i < numThreads; i++) { CreateLMThread t = new CreateLMThread(zkUtil.getZooKeeperConnectString(), root0, FlatLedgerManagerFactory.class.getName(), barrier); t.start(); threads.add(t); } barrier.await(); boolean success = true; for (CreateLMThread t : threads) { t.join(); t.close(); success = t.isSuccessful() && success; } assertTrue("Not all ledger managers created", success); } @Test(timeout=60000) public void testConcurrent2() throws Exception { /// odd create different int numThreadsEach = 25; // bad version in zookeeper String root0 = "/lmroot0"; zkc.create(root0, new 
byte[0], Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT); CyclicBarrier barrier = new CyclicBarrier(numThreadsEach*2+1); List threadsA = new ArrayList(numThreadsEach); for (int i = 0; i < numThreadsEach; i++) { CreateLMThread t = new CreateLMThread(zkUtil.getZooKeeperConnectString(), root0, FlatLedgerManagerFactory.class.getName(), barrier); t.start(); threadsA.add(t); } List threadsB = new ArrayList(numThreadsEach); for (int i = 0; i < numThreadsEach; i++) { CreateLMThread t = new CreateLMThread(zkUtil.getZooKeeperConnectString(), root0, HierarchicalLedgerManagerFactory.class.getName(), barrier); t.start(); threadsB.add(t); } barrier.await(); int numSuccess = 0; int numFails = 0; for (CreateLMThread t : threadsA) { t.join(); t.close(); if (t.isSuccessful()) { numSuccess++; } else { numFails++; } } for (CreateLMThread t : threadsB) { t.join(); t.close(); if (t.isSuccessful()) { numSuccess++; } else { numFails++; } } assertEquals("Incorrect number of successes", numThreadsEach, numSuccess); assertEquals("Incorrect number of failures", numThreadsEach, numFails); } } TestZkVersion.java000066400000000000000000000043351244507361200340610ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/meta/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
* */ package org.apache.bookkeeper.meta; import org.junit.Test; import org.junit.Assert; import org.apache.bookkeeper.versioning.Version; import org.apache.bookkeeper.versioning.Version.Occurred; public class TestZkVersion { @Test(timeout=60000) public void testNullZkVersion() { ZkVersion zkVersion = new ZkVersion(99); try { zkVersion.compare(null); Assert.fail("Should fail comparing with null version."); } catch (NullPointerException npe) { } } @Test(timeout=60000) public void testInvalidVersion() { ZkVersion zkVersion = new ZkVersion(99); try { zkVersion.compare(new Version() { @Override public Occurred compare(Version v) { return Occurred.AFTER; } }); Assert.fail("Should not reach here!"); } catch (IllegalArgumentException iae) { } } @Test(timeout=60000) public void testCompare() { ZkVersion zv = new ZkVersion(99); Assert.assertEquals(Occurred.AFTER, zv.compare(new ZkVersion(98))); Assert.assertEquals(Occurred.BEFORE, zv.compare(new ZkVersion(100))); Assert.assertEquals(Occurred.CONCURRENTLY, zv.compare(new ZkVersion(99))); Assert.assertEquals(Occurred.CONCURRENTLY, zv.compare(Version.ANY)); Assert.assertEquals(Occurred.AFTER, zv.compare(Version.NEW)); } } bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/metastore/000077500000000000000000000000001244507361200315535ustar00rootroot00000000000000MetastoreScannableTableAsyncToSyncConverter.java000066400000000000000000000046411244507361200431140ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/metastore/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package org.apache.bookkeeper.metastore; import java.util.Set; import org.apache.bookkeeper.metastore.MetastoreScannableTable.Order; public class MetastoreScannableTableAsyncToSyncConverter extends MetastoreTableAsyncToSyncConverter { private MetastoreScannableTable scannableTable; public MetastoreScannableTableAsyncToSyncConverter( MetastoreScannableTable table) { super(table); this.scannableTable = table; } public MetastoreCursor openCursor(String firstKey, boolean firstInclusive, String lastKey, boolean lastInclusive, Order order) throws MSException { HeldValue retValue = new HeldValue(); // make the actual async call this.scannableTable.openCursor(firstKey, firstInclusive, lastKey, lastInclusive, order, retValue, null); retValue.waitCallback(); return retValue.getValue(); } public MetastoreCursor openCursor(String firstKey, boolean firstInclusive, String lastKey, boolean lastInclusive, Order order, Set fields) throws MSException { HeldValue retValue = new HeldValue(); // make the actual async call this.scannableTable.openCursor(firstKey, firstInclusive, lastKey, lastInclusive, order, fields, retValue, null); retValue.waitCallback(); return retValue.getValue(); } } MetastoreTableAsyncToSyncConverter.java000066400000000000000000000104601244507361200413010ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/metastore/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.bookkeeper.metastore; import java.util.Set; import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; import org.apache.bookkeeper.metastore.MetastoreCallback; import org.apache.bookkeeper.metastore.MetastoreTable; import org.apache.bookkeeper.metastore.MSException; import org.apache.bookkeeper.metastore.MSException.Code; import org.apache.bookkeeper.versioning.Version; import org.apache.bookkeeper.versioning.Versioned; // Converts async calls to sync calls for MetastoreTable. Currently not // intended to be used other than for simple functional tests, however, // could be developed into a sync API. 
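// A minimal usage sketch (hypothetical, not part of the shipped tests) of the
// converter defined below, assuming a MetastoreTable named "table" has already
// been created elsewhere:
//
//   MetastoreTableAsyncToSyncConverter sync =
//       new MetastoreTableAsyncToSyncConverter(table);
//   Version v = sync.put("key1", someValue, Version.NEW); // blocks until callback
//   Versioned<Value> vv = sync.get("key1");               // throws MSException on error
//
// Each call parks on a one-shot HeldValue latch with a bounded wait; an
// interrupt and any non-OK callback code are both surfaced as MSException.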
public class MetastoreTableAsyncToSyncConverter { static class HeldValue<T> implements MetastoreCallback<T> { private CountDownLatch countDownLatch = new CountDownLatch(1); private int code; private T value = null; void waitCallback() throws MSException { try { countDownLatch.await(10, TimeUnit.SECONDS); } catch (InterruptedException ie) { throw MSException.create(Code.InterruptedException); } if (Code.OK.getCode() != code) { throw MSException.create(Code.get(code)); } } public T getValue() { return value; } @Override public void complete(int rc, T value, Object ctx) { this.code = rc; this.value = value; countDownLatch.countDown(); } } protected MetastoreTable table; public MetastoreTableAsyncToSyncConverter(MetastoreTable table) { this.table = table; } public Versioned<Value> get(String key) throws MSException { HeldValue<Versioned<Value>> retValue = new HeldValue<Versioned<Value>>(); // make the actual async call this.table.get(key, retValue, null); retValue.waitCallback(); return retValue.getValue(); } public Versioned<Value> get(String key, Set<String> fields) throws MSException { HeldValue<Versioned<Value>> retValue = new HeldValue<Versioned<Value>>(); // make the actual async call this.table.get(key, fields, retValue, null); retValue.waitCallback(); return retValue.getValue(); } public void remove(String key, Version version) throws MSException { HeldValue<Void> retValue = new HeldValue<Void>(); // make the actual async call this.table.remove(key, version, retValue, null); retValue.waitCallback(); } public Version put(String key, Value value, Version version) throws MSException { HeldValue<Version> retValue = new HeldValue<Version>(); // make the actual async call this.table.put(key, value, version, retValue, null); retValue.waitCallback(); return retValue.getValue(); } public MetastoreCursor openCursor() throws MSException { HeldValue<MetastoreCursor> retValue = new HeldValue<MetastoreCursor>(); // make the actual async call this.table.openCursor(retValue, null); retValue.waitCallback(); return retValue.getValue(); } public MetastoreCursor openCursor(Set<String> fields) throws MSException { HeldValue<MetastoreCursor> retValue = new HeldValue<MetastoreCursor>(); // make the actual async call this.table.openCursor(fields, retValue, null); retValue.waitCallback(); return retValue.getValue(); } } TestMetaStore.java000066400000000000000000000552341244507361200351130ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/metastore/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License.
*/ package org.apache.bookkeeper.metastore; import static org.apache.bookkeeper.metastore.MetastoreScannableTable.EMPTY_END_KEY; import static org.apache.bookkeeper.metastore.MetastoreScannableTable.EMPTY_START_KEY; import static org.apache.bookkeeper.metastore.MetastoreTable.ALL_FIELDS; import static org.apache.bookkeeper.metastore.MetastoreTable.NON_FIELDS; import java.util.Arrays; import java.util.HashSet; import java.util.Iterator; import java.util.Map; import java.util.Set; import java.util.TreeMap; import junit.framework.TestCase; import org.apache.bookkeeper.metastore.InMemoryMetastoreTable.MetadataVersion; import org.apache.bookkeeper.metastore.MSException.Code; import org.apache.bookkeeper.metastore.MetastoreScannableTable.Order; import org.apache.bookkeeper.versioning.Version; import org.apache.bookkeeper.versioning.Versioned; import org.apache.commons.configuration.CompositeConfiguration; import org.apache.commons.configuration.Configuration; import org.junit.After; import org.junit.Before; import org.junit.Test; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import com.google.common.collect.MapDifference; import com.google.common.collect.Maps; import com.google.common.collect.Sets; public class TestMetaStore extends TestCase { final static Logger logger = LoggerFactory.getLogger(TestMetaStore.class); protected final static String TABLE = "myTable"; protected final static String RECORDID = "test"; protected final static String FIELD_NAME = "name"; protected final static String FIELD_COUNTER = "counter"; protected String getFieldFromValue(Value value, String field) { byte[] v = value.getField(field); return v == null ? null : new String(v); } protected static Value makeValue(String name, Integer counter) { Value data = new Value(); if (name != null) { data.setField(FIELD_NAME, name.getBytes()); } if (counter != null) { data.setField(FIELD_COUNTER, counter.toString().getBytes()); } return data; } protected class Record { String name; Integer counter; Version version; public Record() { } public Record(String name, Integer counter, Version version) { this.name = name; this.counter = counter; this.version = version; } public Record(Versioned vv) { version = vv.getVersion(); Value value = vv.getValue(); if (value == null) { return; } name = getFieldFromValue(value, FIELD_NAME); String c = getFieldFromValue(value, FIELD_COUNTER); if (c != null) { counter = new Integer(c); } } public Version getVersion() { return version; } public Value getValue() { return TestMetaStore.makeValue(name, counter); } public Versioned getVersionedValue() { return new Versioned(getValue(), version); } public void merge(String name, Integer counter, Version version) { if (name != null) { this.name = name; } if (counter != null) { this.counter = counter; } if (version != null) { this.version = version; } } public void merge(Record record) { merge(record.name, record.counter, record.version); } public void checkEqual(Versioned vv) { Version v = vv.getVersion(); Value value = vv.getValue(); assertEquals(name, getFieldFromValue(value, FIELD_NAME)); String c = getFieldFromValue(value, FIELD_COUNTER); if (counter == null) { assertNull(c); } else { assertEquals(counter.toString(), c); } assertTrue(isEqualVersion(version, v)); } } protected MetaStore metastore; protected MetastoreScannableTable myActualTable; protected MetastoreScannableTableAsyncToSyncConverter myTable; protected String getMetaStoreName() { return InMemoryMetaStore.class.getName(); } protected Configuration getConfiguration() { return new 
CompositeConfiguration(); } protected Version newBadVersion() { return new MetadataVersion(-1); } protected Version nextVersion(Version version) { if (Version.NEW == version) { return new MetadataVersion(0); } if (Version.ANY == version) { return Version.ANY; } assertTrue(version instanceof MetadataVersion); return new MetadataVersion(((MetadataVersion) version).incrementVersion()); } private void checkVersion(Version v) { assertNotNull(v); if (v != Version.NEW && v != Version.ANY) { assertTrue(v instanceof MetadataVersion); } } protected boolean isEqualVersion(Version v1, Version v2) { checkVersion(v1); checkVersion(v2); return v1.compare(v2) == Version.Occurred.CONCURRENTLY; } @Override @Before public void setUp() throws Exception { metastore = MetastoreFactory.createMetaStore(getMetaStoreName()); Configuration config = getConfiguration(); metastore.init(config, metastore.getVersion()); myActualTable = metastore.createScannableTable(TABLE); myTable = new MetastoreScannableTableAsyncToSyncConverter(myActualTable); // setup a clean environment clearTable(); } @Override @After public void tearDown() throws Exception { // also clear table after test clearTable(); myActualTable.close(); metastore.close(); } void checkExpectedValue(Versioned vv, String expectedName, Integer expectedCounter, Version expectedVersion) { Record expected = new Record(expectedName, expectedCounter, expectedVersion); expected.checkEqual(vv); } protected Integer getRandom() { return (int)(Math.random()*65536); } protected Versioned getRecord(String recordId) throws Exception { try { return myTable.get(recordId); } catch (MSException.NoKeyException nke) { return null; } } /** * get record with specific fields, assume record EXIST! */ protected Versioned getExistRecordFields(String recordId, Set fields) throws Exception { Versioned retValue = myTable.get(recordId, fields); return retValue; } /** * put and check fields */ protected void putAndCheck(String recordId, String name, Integer counter, Version version, Record expected, Code expectedCode) throws Exception { Version retVersion = null; Code code = Code.OperationFailure; try { retVersion = myTable.put(recordId, makeValue(name, counter), version); code = Code.OK; } catch (MSException.BadVersionException bve) { code = Code.BadVersion; } catch (MSException.NoKeyException nke) { code = Code.NoKey; } catch (MSException.KeyExistsException kee) { code = Code.KeyExists; } assertEquals(expectedCode, code); // get and check all fields of record if (Code.OK == code) { assertTrue(isEqualVersion(retVersion, nextVersion(version))); expected.merge(name, counter, retVersion); } Versioned existedVV = getRecord(recordId); if (null == expected) { assertNull(existedVV); } else { expected.checkEqual(existedVV); } } protected void clearTable() throws Exception { MetastoreCursor cursor = myTable.openCursor(); if (!cursor.hasMoreEntries()) { return; } while (cursor.hasMoreEntries()) { Iterator iter = cursor.readEntries(99); while (iter.hasNext()) { MetastoreTableItem item = iter.next(); String key = item.getKey(); myTable.remove(key, Version.ANY); } } cursor.close(); } /** * Test (get, get partial field, remove) on non-existent element. 
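 * <p>A hedged sketch of the pattern used below (all names are from this class):
 * <pre>
 *   try {
 *       myTable.get(RECORDID);             // key was never created
 *       fail("Should fail to get a non-existent key");
 *   } catch (MSException.NoKeyException nke) {
 *       // expected: get, partial get and remove all signal NoKey
 *   }
 * </pre>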
*/ @Test(timeout=60000) public void testNonExistent() throws Exception { // get try { myTable.get(RECORDID); fail("Should fail to get a non-existent key"); } catch (MSException.NoKeyException nke) { } // get partial field Set fields = new HashSet(Arrays.asList(new String[] { FIELD_COUNTER })); try { myTable.get(RECORDID, fields); fail("Should fail to get a non-existent key with specified fields"); } catch (MSException.NoKeyException nke) { } // remove try { myTable.remove(RECORDID, Version.ANY); fail("Should fail to delete a non-existent key"); } catch (MSException.NoKeyException nke) { } } /** * Test usage of get operation on (full and partial) fields. */ @Test(timeout=60000) public void testGet() throws Exception { Versioned vv; final Set fields = new HashSet(Arrays.asList(new String[] { FIELD_NAME })); final String name = "get"; final Integer counter = getRandom(); // put test item Version version = myTable.put(RECORDID, makeValue(name, counter), Version.NEW); assertNotNull(version); // fetch with all fields vv = getExistRecordFields(RECORDID, ALL_FIELDS); checkExpectedValue(vv, name, counter, version); // partial get name vv = getExistRecordFields(RECORDID, fields); checkExpectedValue(vv, name, null, version); // non fields vv = getExistRecordFields(RECORDID, NON_FIELDS); checkExpectedValue(vv, null, null, version); // get null key should fail try { getExistRecordFields(null, NON_FIELDS); fail("Should fail to get null key with NON fields"); } catch (MSException.IllegalOpException ioe) { } try { getExistRecordFields(null, ALL_FIELDS); fail("Should fail to get null key with ALL fields."); } catch (MSException.IllegalOpException ioe) { } try { getExistRecordFields(null, fields); fail("Should fail to get null key with fields " + fields); } catch (MSException.IllegalOpException ioe) { } } /** * Test usage of put operation with (full and partial) fields. 
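 * <p>Hedged sketch of the compare-and-set contract exercised below:
 * <pre>
 *   Version v1 = myTable.put(RECORDID, makeValue("a", 1), Version.NEW); // create
 *   Version v2 = myTable.put(RECORDID, makeValue("b", 2), v1);          // CAS succeeds
 *   myTable.put(RECORDID, makeValue("c", 3), v1); // stale v1 -> BadVersionException
 * </pre>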
*/ @Test(timeout=60000) public void testPut() throws Exception { final Integer counter = getRandom(); final String name = "put"; Version version; /** * test correct version put */ // put test item version = myTable.put(RECORDID, makeValue(name, counter), Version.NEW); assertNotNull(version); Record expected = new Record(name, counter, version); // correct version put with only name field changed putAndCheck(RECORDID, "name1", null, expected.getVersion(), expected, Code.OK); // correct version put with only counter field changed putAndCheck(RECORDID, null, counter + 1, expected.getVersion(), expected, Code.OK); // correct version put with all fields filled putAndCheck(RECORDID, "name2", counter + 2, expected.getVersion(), expected, Code.OK); // test put exist entry with Version.ANY checkPartialPut("put exist entry with Version.ANY", Version.ANY, expected, Code.OK); /** * test bad version put */ // put to existed entry with Version.NEW badVersionedPut(Version.NEW, Code.KeyExists); // put to existed entry with bad version badVersionedPut(newBadVersion(), Code.BadVersion); // remove the entry myTable.remove(RECORDID, Version.ANY); // put to non-existent entry with bad version badVersionedPut(newBadVersion(), Code.NoKey); // put to non-existent entry with Version.ANY badVersionedPut(Version.ANY, Code.NoKey); /** * test illegal arguments */ illegalPut(null, Version.NEW); illegalPut(makeValue("illegal value", getRandom()), null); illegalPut(null, null); } protected void badVersionedPut(Version badVersion, Code expectedCode) throws Exception { Versioned vv = getRecord(RECORDID); Record expected = null; if (expectedCode != Code.NoKey) { assertNotNull(vv); expected = new Record(vv); } checkPartialPut("badVersionedPut", badVersion, expected, expectedCode); } protected void checkPartialPut(String name, Version version, Record expected, Code expectedCode) throws Exception { Integer counter; // bad version put with all fields filled counter = getRandom(); putAndCheck(RECORDID, name + counter, counter, version, expected, expectedCode); // bad version put with only name field changed counter = getRandom(); putAndCheck(RECORDID, name + counter, null, version, expected, expectedCode); // bad version put with only counter field changed putAndCheck(RECORDID, null, counter, version, expected, expectedCode); } protected void illegalPut(Value value, Version version) throws MSException { try { myTable.put(RECORDID, value, version); fail("Should fail to do versioned put with illegal arguments"); } catch (MSException.IllegalOpException ioe) { } } /** * Test usage of (unconditional remove, BadVersion remove, CorrectVersion * remove) operation. 
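 * <p>Hedged sketch of the three removal modes checked below:
 * <pre>
 *   myTable.remove(RECORDID, Version.ANY);      // unconditional, always matches
 *   myTable.remove(RECORDID, newBadVersion());  // mismatched -> BadVersionException
 *   myTable.remove(RECORDID, version);          // exact current version succeeds
 * </pre>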
*/ @Test(timeout=60000) public void testRemove() throws Exception { final Integer counter = getRandom(); final String name = "remove"; Version version; // insert test item version = myTable.put(RECORDID, makeValue(name, counter), Version.NEW); assertNotNull(version); // test unconditional remove myTable.remove(RECORDID, Version.ANY); // insert test item version = myTable.put(RECORDID, makeValue(name, counter), Version.NEW); assertNotNull(version); // test remove with bad version try { myTable.remove(RECORDID, Version.NEW); fail("Should fail to remove a given key with bad version"); } catch (MSException.BadVersionException bve) { } try { myTable.remove(RECORDID, newBadVersion()); fail("Should fail to remove a given key with bad version"); } catch (MSException.BadVersionException bve) { } // test remove with correct version myTable.remove(RECORDID, version); } protected void openCursorTest(MetastoreCursor cursor, Map<String, Value> expectedValues, int numEntriesPerScan) throws Exception { try { Map<String, Value> entries = Maps.newHashMap(); while (cursor.hasMoreEntries()) { Iterator<MetastoreTableItem> iter = cursor.readEntries(numEntriesPerScan); while (iter.hasNext()) { MetastoreTableItem item = iter.next(); entries.put(item.getKey(), item.getValue().getValue()); } } MapDifference<String, Value> diff = Maps.difference(expectedValues, entries); assertTrue(diff.areEqual()); } finally { cursor.close(); } } void openRangeCursorTest(String firstKey, boolean firstInclusive, String lastKey, boolean lastInclusive, Order order, Set<String> fields, Iterator<Map.Entry<String, Value>> expectedValues, int numEntriesPerScan) throws Exception { MetastoreCursor cursor = myTable.openCursor(firstKey, firstInclusive, lastKey, lastInclusive, order, fields); try { while (cursor.hasMoreEntries()) { Iterator<MetastoreTableItem> iter = cursor.readEntries(numEntriesPerScan); while (iter.hasNext()) { assertTrue(expectedValues.hasNext()); MetastoreTableItem item = iter.next(); Map.Entry<String, Value> expectedItem = expectedValues.next(); assertEquals(expectedItem.getKey(), item.getKey()); assertEquals(expectedItem.getValue(), item.getValue().getValue()); } } assertFalse(expectedValues.hasNext()); } finally { cursor.close(); } } /** * Test usage of (scan) operation on (full and partial) fields.
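 * <p>Hedged sketch: the expected results of each range scan are modelled with
 * TreeMap.subMap(), mirroring the cursor's inclusive/exclusive bounds:
 * <pre>
 *   Iterator<Map.Entry<String, Value>> expected =
 *       allValues.subMap("l", true, "u", false).entrySet().iterator();
 *   openRangeCursorTest("l", true, "u", false, Order.ASC, ALL_FIELDS, expected, 7);
 * </pre>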
*/ @Test(timeout=60000) public void testOpenCursor() throws Exception { TreeMap<String, Value> allValues = Maps.newTreeMap(); TreeMap<String, Value> partialValues = Maps.newTreeMap(); TreeMap<String, Value> nonValues = Maps.newTreeMap(); Set<String> counterFields = Sets.newHashSet(FIELD_COUNTER); for (int i=5; i<24; i++) { char c = (char)('a' + i); String key = String.valueOf(c); Value v = makeValue("value" + i, i); Value cv = v.project(counterFields); Value nv = v.project(NON_FIELDS); myTable.put(key, new Value(v), Version.NEW); allValues.put(key, v); partialValues.put(key, cv); nonValues.put(key, nv); } // test open cursor MetastoreCursor cursor = myTable.openCursor(ALL_FIELDS); openCursorTest(cursor, allValues, 7); cursor = myTable.openCursor(counterFields); openCursorTest(cursor, partialValues, 7); cursor = myTable.openCursor(NON_FIELDS); openCursorTest(cursor, nonValues, 7); // test order inclusive exclusive Iterator<Map.Entry<String, Value>> expectedIterator; expectedIterator = allValues.subMap("l", true, "u", true).entrySet().iterator(); openRangeCursorTest("l", true, "u", true, Order.ASC, ALL_FIELDS, expectedIterator, 7); expectedIterator = allValues.descendingMap().subMap("u", true, "l", true).entrySet().iterator(); openRangeCursorTest("u", true, "l", true, Order.DESC, ALL_FIELDS, expectedIterator, 7); expectedIterator = allValues.subMap("l", false, "u", false).entrySet().iterator(); openRangeCursorTest("l", false, "u", false, Order.ASC, ALL_FIELDS, expectedIterator, 7); expectedIterator = allValues.descendingMap().subMap("u", false, "l", false).entrySet().iterator(); openRangeCursorTest("u", false, "l", false, Order.DESC, ALL_FIELDS, expectedIterator, 7); expectedIterator = allValues.subMap("l", true, "u", false).entrySet().iterator(); openRangeCursorTest("l", true, "u", false, Order.ASC, ALL_FIELDS, expectedIterator, 7); expectedIterator = allValues.descendingMap().subMap("u", true, "l", false).entrySet().iterator(); openRangeCursorTest("u", true, "l", false, Order.DESC, ALL_FIELDS, expectedIterator, 7); expectedIterator = allValues.subMap("l", false, "u", true).entrySet().iterator(); openRangeCursorTest("l", false, "u", true, Order.ASC, ALL_FIELDS, expectedIterator, 7); expectedIterator = allValues.descendingMap().subMap("u", false, "l", true).entrySet().iterator(); openRangeCursorTest("u", false, "l", true, Order.DESC, ALL_FIELDS, expectedIterator, 7); // test out of range String firstKey = "f"; String lastKey = "x"; expectedIterator = allValues.subMap(firstKey, true, lastKey, true).entrySet().iterator(); openRangeCursorTest("a", true, "z", true, Order.ASC, ALL_FIELDS, expectedIterator, 7); expectedIterator = allValues.subMap("l", true, lastKey, true).entrySet().iterator(); openRangeCursorTest("l", true, "z", true, Order.ASC, ALL_FIELDS, expectedIterator, 7); expectedIterator = allValues.subMap(firstKey, true, "u", true).entrySet().iterator(); openRangeCursorTest("a", true, "u", true, Order.ASC, ALL_FIELDS, expectedIterator, 7); // test EMPTY_START_KEY and EMPTY_END_KEY expectedIterator = allValues.subMap(firstKey, true, "u", true).entrySet().iterator(); openRangeCursorTest(EMPTY_START_KEY, true, "u", true, Order.ASC, ALL_FIELDS, expectedIterator, 7); expectedIterator = allValues.descendingMap().subMap(lastKey, true, "l", true).entrySet().iterator(); openRangeCursorTest(EMPTY_END_KEY, true, "l", true, Order.DESC, ALL_FIELDS, expectedIterator, 7); // test illegal arguments try { myTable.openCursor("a", true, "z", true, Order.DESC, ALL_FIELDS); fail("Should fail with wrong range"); } catch (MSException.IllegalOpException ioe) { } try {
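// (Hedged note) Here the key range runs backwards for ASC order; like the
// DESC mismatch just above and the null order below, it should be rejected
// as an illegal operation up front rather than yielding an empty cursor.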
myTable.openCursor("z", true, "a", true, Order.ASC, ALL_FIELDS); fail("Should fail with wrong range"); } catch (MSException.IllegalOpException ioe) { } try { myTable.openCursor("a", true, "z", true, null, ALL_FIELDS); fail("Should fail with null order"); } catch (MSException.IllegalOpException ioe) { } } } bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/proto/000077500000000000000000000000001244507361200307135ustar00rootroot00000000000000TestBKStats.java000066400000000000000000000027611244507361200336600ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/proto/** * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.proto; import static org.junit.Assert.assertEquals; import org.apache.bookkeeper.proto.BKStats.OpStats; import org.junit.Test; /** Tests that Statistics updation in Bookie Server */ public class TestBKStats { /** * Tests that updatLatency should not fail with * ArrayIndexOutOfBoundException when latency time coming as negative. */ @Test(timeout=60000) public void testUpdateLatencyShouldNotFailWithAIOBEWithNegativeLatency() throws Exception { OpStats opStat = new OpStats(); opStat.updateLatency(-10); assertEquals("Should not update any latency metrics", 0, opStat.numSuccessOps); } } TestDeathwatcher.java000066400000000000000000000040761244507361200347510ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/protopackage org.apache.bookkeeper.proto; /* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
* */ import org.junit.*; import static org.junit.Assert.*; import org.apache.bookkeeper.test.BookKeeperClusterTestCase; import org.apache.bookkeeper.conf.ServerConfiguration; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Tests for the BookieServer death watcher */ public class TestDeathwatcher extends BookKeeperClusterTestCase { static Logger LOG = LoggerFactory.getLogger(TestDeathwatcher.class); public TestDeathwatcher() { super(1); } /** * Ensure that if the autorecovery daemon is running inside the bookie * then a failure/crash in the autorecovery daemon will not take down the * bookie also. */ @Test(timeout=30000) public void testAutorecoveryFailureDoesntKillBookie() throws Exception { ServerConfiguration conf = newServerConfiguration().setAutoRecoveryDaemonEnabled(true); BookieServer bs = startBookie(conf); assertNotNull("Autorecovery daemon should exist", bs.autoRecoveryMain); assertTrue("Bookie should be running", bs.isBookieRunning()); bs.autoRecoveryMain.shutdown(); Thread.sleep(conf.getDeathWatchInterval()*2); // give deathwatcher time to run assertTrue("Bookie should be running", bs.isBookieRunning()); bs.shutdown(); } } TestPerChannelBookieClient.java000066400000000000000000000276671244507361200366670ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/protopackage org.apache.bookkeeper.proto; /* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ import org.junit.*; import static org.junit.Assert.*; import java.net.InetSocketAddress; import java.util.concurrent.atomic.AtomicLong; import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.Executors; import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; import java.nio.ByteBuffer; import java.io.IOException; import org.apache.bookkeeper.conf.ServerConfiguration; import org.apache.bookkeeper.bookie.Bookie; import org.apache.bookkeeper.client.BKException; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.GenericCallback; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.ReadEntryCallback; import org.apache.bookkeeper.proto.PerChannelBookieClient.ConnectionState; import org.apache.bookkeeper.test.BookKeeperClusterTestCase; import org.apache.bookkeeper.util.SafeRunnable; import org.apache.bookkeeper.util.OrderedSafeExecutor; import org.jboss.netty.channel.Channel; import org.jboss.netty.buffer.ChannelBuffer; import org.jboss.netty.channel.socket.ClientSocketChannelFactory; import org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Tests for PerChannelBookieClient. Historically, this class has * had a few race conditions, so this is what these tests focus on.
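 * <p>The shared shape of these tests, sketched (hedged, using only names that
 * appear in the methods below):
 * <pre>
 *   PerChannelBookieClient client = new PerChannelBookieClient(
 *       executor, channelFactory, addr, bytesOutstanding);
 *   client.connectIfNeededAndDoOp(cb); // kick off an async connect
 *   client.close();                    // immediately race teardown against it
 * </pre>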
*/ public class TestPerChannelBookieClient extends BookKeeperClusterTestCase { static Logger LOG = LoggerFactory.getLogger(TestPerChannelBookieClient.class); public TestPerChannelBookieClient() { super(1); } /** * Test that a race does not exist between connection completion * and client closure. If a race does exist, this test will simply * hang at releaseExternalResources() as it is uninterruptible. * This specific race was found in * {@link https://issues.apache.org/jira/browse/BOOKKEEPER-485}. */ @Test(timeout=60000) public void testConnectCloseRace() throws Exception { ClientSocketChannelFactory channelFactory = new NioClientSocketChannelFactory(Executors.newCachedThreadPool(), Executors.newCachedThreadPool()); OrderedSafeExecutor executor = new OrderedSafeExecutor(1); InetSocketAddress addr = getBookie(0); AtomicLong bytesOutstanding = new AtomicLong(0); for (int i = 0; i < 1000; i++) { PerChannelBookieClient client = new PerChannelBookieClient(executor, channelFactory, addr, bytesOutstanding); client.connectIfNeededAndDoOp(new GenericCallback() { @Override public void operationComplete(int rc, Void result) { // do nothing, we don't care about doing anything with the connection, // we just want to trigger it connecting. } }); client.close(); } channelFactory.releaseExternalResources(); executor.shutdown(); } /** * Test race scenario found in {@link https://issues.apache.org/jira/browse/BOOKKEEPER-5} * where multiple clients try to connect a channel simultaneously. If not synchronised * correctly, this causes the netty channel to get orphaned. */ @Test(timeout=60000) public void testConnectRace() throws Exception { GenericCallback nullop = new GenericCallback() { @Override public void operationComplete(int rc, Void result) { // do nothing, we don't care about doing anything with the connection, // we just want to trigger it connecting. } }; ClientSocketChannelFactory channelFactory = new NioClientSocketChannelFactory(Executors.newCachedThreadPool(), Executors.newCachedThreadPool()); OrderedSafeExecutor executor = new OrderedSafeExecutor(1); InetSocketAddress addr = getBookie(0); AtomicLong bytesOutstanding = new AtomicLong(0); for (int i = 0; i < 100; i++) { PerChannelBookieClient client = new PerChannelBookieClient(executor, channelFactory, addr, bytesOutstanding); for (int j = i; j < 10; j++) { client.connectIfNeededAndDoOp(nullop); } client.close(); } channelFactory.releaseExternalResources(); executor.shutdown(); } /** * Test that all resources are freed if connections and disconnections * are interleaved randomly. * * {@link https://issues.apache.org/jira/browse/BOOKKEEPER-620} */ @Test(timeout=60000) public void testDisconnectRace() throws Exception { final GenericCallback nullop = new GenericCallback() { @Override public void operationComplete(int rc, Void result) { // do nothing, we don't care about doing anything with the connection, // we just want to trigger it connecting. 
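// (Hedged note) A no-op callback is deliberate here: the race being probed is
// inside PerChannelBookieClient's connect/disconnect state machine, so no
// user-level operation needs to complete for the test to be meaningful.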
} }; final int ITERATIONS = 100000; ClientSocketChannelFactory channelFactory = new NioClientSocketChannelFactory(Executors.newCachedThreadPool(), Executors.newCachedThreadPool()); OrderedSafeExecutor executor = new OrderedSafeExecutor(1); InetSocketAddress addr = getBookie(0); AtomicLong bytesOutstanding = new AtomicLong(0); final PerChannelBookieClient client = new PerChannelBookieClient(executor, channelFactory, addr, bytesOutstanding); final AtomicBoolean shouldFail = new AtomicBoolean(false); final AtomicBoolean running = new AtomicBoolean(true); final CountDownLatch disconnectRunning = new CountDownLatch(1); Thread connectThread = new Thread() { public void run() { try { if (!disconnectRunning.await(10, TimeUnit.SECONDS)) { LOG.error("Disconnect thread never started"); shouldFail.set(true); } } catch (InterruptedException ie) { LOG.error("Connect thread interrupted", ie); Thread.currentThread().interrupt(); running.set(false); } for (int i = 0; i < ITERATIONS && running.get(); i++) { client.connectIfNeededAndDoOp(nullop); } running.set(false); } }; Thread disconnectThread = new Thread() { public void run() { disconnectRunning.countDown(); while (running.get()) { client.disconnect(); } } }; Thread checkThread = new Thread() { public void run() { ConnectionState state; Channel channel; while (running.get()) { synchronized (client) { state = client.state; channel = client.channel; if ((state == ConnectionState.CONNECTED && (channel == null || !channel.isConnected())) || (state != ConnectionState.CONNECTED && channel != null && channel.isConnected())) { LOG.error("State({}) and channel({}) inconsistent " + channel, state, channel == null ? null : channel.isConnected()); shouldFail.set(true); running.set(false); } } } } }; connectThread.start(); disconnectThread.start(); checkThread.start(); connectThread.join(); disconnectThread.join(); checkThread.join(); assertFalse("Failure in threads, check logs", shouldFail.get()); client.close(); channelFactory.releaseExternalResources(); executor.shutdown(); } /** * Test that requests are completed even if the channel is disconnected * {@link https://issues.apache.org/jira/browse/BOOKKEEPER-668} */ @Test(timeout=60000) public void testRequestCompletesAfterDisconnectRace() throws Exception { ServerConfiguration conf = killBookie(0); Bookie delayBookie = new Bookie(conf) { @Override public ByteBuffer readEntry(long ledgerId, long entryId) throws IOException, NoLedgerException { try { Thread.sleep(3000); } catch (InterruptedException ie) { throw new IOException("Interrupted waiting", ie); } return super.readEntry(ledgerId, entryId); } }; bsConfs.add(conf); bs.add(startBookie(conf, delayBookie)); ClientSocketChannelFactory channelFactory = new NioClientSocketChannelFactory(Executors.newCachedThreadPool(), Executors.newCachedThreadPool()); final OrderedSafeExecutor executor = new OrderedSafeExecutor(1); InetSocketAddress addr = getBookie(0); AtomicLong bytesOutstanding = new AtomicLong(0); final PerChannelBookieClient client = new PerChannelBookieClient(executor, channelFactory, addr, bytesOutstanding); final CountDownLatch completion = new CountDownLatch(1); final ReadEntryCallback cb = new ReadEntryCallback() { @Override public void readEntryComplete(int rc, long ledgerId, long entryId, ChannelBuffer buffer, Object ctx) { completion.countDown(); } }; client.connectIfNeededAndDoOp(new GenericCallback() { @Override public void operationComplete(final int rc, Void result) { if (rc != BKException.Code.OK) { executor.submitOrdered(1, new SafeRunnable() { 
@Override public void safeRun() { cb.readEntryComplete(rc, 1, 1, null, null); } }); return; } client.readEntryAndFenceLedger(1, "00000111112222233333".getBytes(), 1, cb, null); } }); Thread.sleep(1000); client.disconnect(); client.close(); channelFactory.releaseExternalResources(); executor.shutdown(); assertTrue("Request should have completed", completion.await(5, TimeUnit.SECONDS)); } } TestProtoVersions.java000066400000000000000000000110121244507361200351660ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/proto/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.proto; import org.apache.bookkeeper.util.OrderedSafeExecutor; import org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.GenericCallback; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.ReadEntryCallback; import org.jboss.netty.buffer.ChannelBuffer; import org.apache.bookkeeper.client.BKException; import org.apache.bookkeeper.test.BaseTestCase; import org.apache.bookkeeper.test.BookieClientTest; import static org.junit.Assert.*; import org.junit.Test; import org.junit.Before; import org.junit.After; import java.util.concurrent.TimeUnit; import java.util.concurrent.CountDownLatch; import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.atomic.AtomicLong; import java.net.InetSocketAddress; import java.net.InetAddress; public class TestProtoVersions { private BookieClientTest base; @Before public void setup() throws Exception { base = new BookieClientTest(); base.setUp(); } @After public void teardown() throws Exception { base.tearDown(); } private void testVersion(int version, int expectedresult) throws Exception { PerChannelBookieClient bc = new PerChannelBookieClient(base.executor, base.channelFactory, new InetSocketAddress(InetAddress.getLocalHost(), base.port), new AtomicLong(0)); final AtomicInteger outerrc = new AtomicInteger(-1); final CountDownLatch connectLatch = new CountDownLatch(1); bc.connectIfNeededAndDoOp(new GenericCallback() { public void operationComplete(int rc, Void result) { outerrc.set(rc); connectLatch.countDown(); } }); connectLatch.await(5, TimeUnit.SECONDS); assertEquals("client not connected", BKException.Code.OK, outerrc.get()); outerrc.set(-1000); final CountDownLatch readLatch = new CountDownLatch(1); ReadEntryCallback cb = new ReadEntryCallback() { public void readEntryComplete(int rc, long ledgerId, long entryId, ChannelBuffer buffer, Object ctx) { outerrc.set(rc); readLatch.countDown(); } }; bc.readCompletions.put(bc.newCompletionKey(1, 1), new PerChannelBookieClient.ReadCompletion(cb, this)); int totalHeaderSize = 4 // for the length of the packet + 4 // for request type + 8 
// for ledgerId + 8; // for entryId // This will need to updated if the protocol for read changes ChannelBuffer tmpEntry = bc.channel.getConfig().getBufferFactory().getBuffer(totalHeaderSize); tmpEntry.writeInt(totalHeaderSize - 4); tmpEntry.writeInt(new BookieProtocol.PacketHeader((byte)version, BookieProtocol.READENTRY, (short)0).toInt()); tmpEntry.writeLong(1); tmpEntry.writeLong(1); bc.channel.write(tmpEntry).awaitUninterruptibly(); readLatch.await(5, TimeUnit.SECONDS); assertEquals("Expected result differs", expectedresult, outerrc.get()); bc.close(); } @Test(timeout=60000) public void testVersions() throws Exception { testVersion(BookieProtocol.LOWEST_COMPAT_PROTOCOL_VERSION-1, BKException.Code.ProtocolVersionException); testVersion(BookieProtocol.LOWEST_COMPAT_PROTOCOL_VERSION, BKException.Code.NoSuchEntryException); testVersion(BookieProtocol.CURRENT_PROTOCOL_VERSION, BKException.Code.NoSuchEntryException); testVersion(BookieProtocol.CURRENT_PROTOCOL_VERSION+1, BKException.Code.ProtocolVersionException); } }bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/replication/000077500000000000000000000000001244507361200320615ustar00rootroot00000000000000AuditorBookieTest.java000066400000000000000000000243471244507361200362570ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/replication/** * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
* */ package org.apache.bookkeeper.replication; import java.util.HashMap; import java.util.LinkedList; import java.util.List; import junit.framework.Assert; import static org.junit.Assert.assertTrue; import org.apache.bookkeeper.conf.ServerConfiguration; import org.apache.bookkeeper.proto.BookieServer; import org.apache.bookkeeper.util.ZkUtils; import org.apache.bookkeeper.zookeeper.ZooKeeperWatcherBase; import org.apache.bookkeeper.test.BookKeeperClusterTestCase; import org.apache.bookkeeper.util.StringUtils; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.ZooKeeper; import org.junit.Test; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * This test verifies auditor bookie scenarios that monitor bookie failures */ public class AuditorBookieTest extends BookKeeperClusterTestCase { // Depending on the taste, select the amount of logging // by uncommenting one of the two lines below // static Logger LOG = Logger.getRootLogger(); private final static Logger LOG = LoggerFactory.getLogger(AuditorBookieTest.class); private String electionPath; private HashMap<String, AuditorElector> auditorElectors = new HashMap<String, AuditorElector>(); private List<ZooKeeper> zkClients = new LinkedList<ZooKeeper>(); public AuditorBookieTest() { super(6); electionPath = baseConf.getZkLedgersRootPath() + "/underreplication/auditorelection"; } @Override public void setUp() throws Exception { super.setUp(); startAuditorElectors(); } @Override public void tearDown() throws Exception { stopAuditorElectors(); for (ZooKeeper zk : zkClients) { zk.close(); } zkClients.clear(); super.tearDown(); } /** * Test should ensure that only one bookie acts as auditor. Starting or * shutting down bookies other than the auditor shouldn't initiate * re-election or create multiple auditors. */ @Test(timeout=60000) public void testEnsureOnlySingleAuditor() throws Exception { BookieServer auditor = verifyAuditor(); // shutdown bookie which is not an auditor int indexOf = bs.indexOf(auditor); int bkIndexDownBookie; if (indexOf < bs.size() - 1) { bkIndexDownBookie = indexOf + 1; } else { bkIndexDownBookie = indexOf - 1; } shutdownBookie(bs.get(bkIndexDownBookie)); startNewBookie(); startNewBookie(); // grace period for the auditor re-election if any BookieServer newAuditor = waitForNewAuditor(auditor); Assert.assertSame( "Auditor re-election should not happen when a non-auditor bookie is shut down!", auditor, newAuditor); } /** * Test that auditor crashes trigger re-election and another bookie takes * over the auditorship. */ @Test(timeout=60000) public void testSuccessiveAuditorCrashes() throws Exception { BookieServer auditor = verifyAuditor(); shutdownBookie(auditor); BookieServer newAuditor1 = waitForNewAuditor(auditor); bs.remove(auditor); shutdownBookie(newAuditor1); BookieServer newAuditor2 = waitForNewAuditor(newAuditor1); Assert.assertNotSame( "Auditor re-election did not happen after the auditor failure!", auditor, newAuditor2); bs.remove(newAuditor1); } /** * Test restarting the entire bookie cluster. It shouldn't create multiple * bookie auditors. */ @Test(timeout=60000) public void testBookieClusterRestart() throws Exception { BookieServer auditor = verifyAuditor(); for (AuditorElector auditorElector : auditorElectors.values()) { assertTrue("Auditor elector is not running!", auditorElector.isRunning()); } stopBKCluster(); stopAuditorElectors(); startBKCluster(); startAuditorElectors(); BookieServer newAuditor = waitForNewAuditor(auditor); Assert.assertNotSame( "Auditor re-election did not happen after the cluster restart!", auditor, newAuditor); } /** * Test that the vote is deleted from ZooKeeper during shutdown.
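 * <p>Hedged sketch of the cleanup check performed below: after the auditor
 * shuts down, no child of the election path may still carry its address:
 * <pre>
 *   for (String child : zkc.getChildren(electionPath, false)) {
 *       byte[] data = zkc.getData(electionPath + '/' + child, false, null);
 *       assertFalse(new String(data).contains(addr));
 *   }
 * </pre>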
*/ @Test(timeout=60000) public void testShutdown() throws Exception { BookieServer auditor = verifyAuditor(); shutdownBookie(auditor); // waiting for new auditor BookieServer newAuditor = waitForNewAuditor(auditor); Assert.assertNotSame( "Auditor re-election did not happen after auditor failure!", auditor, newAuditor); int indexOfDownBookie = bs.indexOf(auditor); bs.remove(indexOfDownBookie); bsConfs.remove(indexOfDownBookie); tmpDirs.remove(indexOfDownBookie); List<String> children = zkc.getChildren(electionPath, false); for (String child : children) { byte[] data = zkc.getData(electionPath + '/' + child, false, null); String bookieIP = new String(data); String addr = StringUtils.addrToString(auditor.getLocalAddress()); Assert.assertFalse("AuditorElection cleanup fails", bookieIP .contains(addr)); } } /** * Test that restarting the previous Auditor bookie shouldn't initiate * re-election, and that it should create a new vote after restarting. */ @Test(timeout=60000) public void testRestartAuditorBookieAfterCrashing() throws Exception { BookieServer auditor = verifyAuditor(); shutdownBookie(auditor); String addr = StringUtils.addrToString(auditor.getLocalAddress()); // restarting Bookie with same configurations. int indexOfDownBookie = bs.indexOf(auditor); ServerConfiguration serverConfiguration = bsConfs .get(indexOfDownBookie); bs.remove(indexOfDownBookie); bsConfs.remove(indexOfDownBookie); tmpDirs.remove(indexOfDownBookie); auditorElectors.remove(addr); startBookie(serverConfiguration); // starting corresponding auditor elector LOG.debug("Performing Auditor Election:" + addr); startAuditorElector(addr); // waiting for new auditor to come BookieServer newAuditor = waitForNewAuditor(auditor); Assert.assertNotSame( "Auditor re-election did not happen after auditor failure!", auditor, newAuditor); Assert.assertFalse("Old auditor should not be re-elected after rejoining", auditor .getLocalAddress().getPort() == newAuditor.getLocalAddress() .getPort()); } private void startAuditorElector(String addr) throws Exception { ZooKeeperWatcherBase w = new ZooKeeperWatcherBase(10000); ZooKeeper zk = ZkUtils.createConnectedZookeeperClient( zkUtil.getZooKeeperConnectString(), w); zkClients.add(zk); AuditorElector auditorElector = new AuditorElector(addr, baseConf, zk); auditorElectors.put(addr, auditorElector); auditorElector.start(); LOG.debug("Starting Auditor Elector"); } private void startAuditorElectors() throws Exception { for (BookieServer bserver : bs) { String addr = StringUtils.addrToString(bserver.getLocalAddress()); startAuditorElector(addr); } } private void stopAuditorElectors() throws Exception { for (AuditorElector auditorElector : auditorElectors.values()) { auditorElector.shutdown(); LOG.debug("Stopping Auditor Elector!"); } } private BookieServer verifyAuditor() throws Exception { List<BookieServer> auditors = getAuditorBookie(); Assert.assertEquals("Multiple Bookies acting as Auditor!", 1, auditors .size()); LOG.debug("Bookie running as Auditor:" + auditors.get(0)); return auditors.get(0); } private List<BookieServer> getAuditorBookie() throws Exception { List<BookieServer> auditors = new LinkedList<BookieServer>(); byte[] data = zkc.getData(electionPath, false, null); Assert.assertNotNull("Auditor election failed", data); for (BookieServer bks : bs) { if (new String(data).contains(bks.getLocalAddress().getPort() + "")) { auditors.add(bks); } } return auditors; } private void shutdownBookie(BookieServer bkServer) throws Exception { String addr = StringUtils.addrToString(bkServer.getLocalAddress()); LOG.debug("Shutting down bookie:" + addr); // shutdown bookie which is an
auditor bkServer.shutdown(); // stopping corresponding auditor elector auditorElectors.get(addr).shutdown(); } private BookieServer waitForNewAuditor(BookieServer auditor) throws Exception { BookieServer newAuditor = null; int retryCount = 8; while (retryCount > 0) { List<BookieServer> auditors = getAuditorBookie(); if (auditors.size() > 0) { newAuditor = auditors.get(0); if (auditor != newAuditor) { break; } } Thread.sleep(500); retryCount--; } Assert.assertNotNull( "New Auditor is not re-elected after auditor crashes", newAuditor); verifyAuditor(); return newAuditor; } } AuditorLedgerCheckerTest.java000066400000000000000000000355531244507361200375370ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/replication/** * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.replication; import java.io.IOException; import java.nio.ByteBuffer; import java.util.ArrayList; import java.util.Arrays; import java.util.HashMap; import java.util.HashSet; import java.util.List; import java.util.Map; import java.util.Random; import java.util.Set; import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; import org.apache.bookkeeper.client.BKException; import org.apache.bookkeeper.client.LedgerHandle; import org.apache.bookkeeper.client.BookKeeper.DigestType; import org.apache.bookkeeper.conf.ServerConfiguration; import org.apache.bookkeeper.meta.ZkLedgerUnderreplicationManager; import org.apache.bookkeeper.proto.BookieServer; import org.apache.bookkeeper.proto.DataFormats.UnderreplicatedLedgerFormat; import org.apache.bookkeeper.replication.ReplicationException.CompatibilityException; import org.apache.bookkeeper.replication.ReplicationException.UnavailableException; import org.apache.bookkeeper.test.MultiLedgerManagerTestCase; import org.apache.bookkeeper.util.StringUtils; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.WatchedEvent; import org.apache.zookeeper.Watcher; import org.junit.Before; import org.junit.Test; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Tests publishing of under-replicated ledgers by the Auditor bookie node when * the corresponding bookies are identified as not running */ public class AuditorLedgerCheckerTest extends MultiLedgerManagerTestCase { // Depending on the taste, select the amount of logging // by uncommenting one of the two lines below // static Logger LOG = Logger.getRootLogger(); private final static Logger LOG = LoggerFactory .getLogger(AuditorLedgerCheckerTest.class); private static final byte[] ledgerPassword = "aaa".getBytes(); private Random rng; // Random Number Generator private DigestType digestType; private final String UNDERREPLICATED_PATH = baseClientConf .getZkLedgersRootPath() + "/underreplication/ledgers";
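// The auditor publishes each under-replicated ledger as a child znode beneath // UNDERREPLICATED_PATH; ZkLedgerUnderreplicationManager.getUrLedgerZnode() resolves the // znode for a given ledger id (the leaf name carries the "urL" prefix, as checked in // ReplicationTestUtil), and the tests below watch those znodes to observe ledgers being // marked as under-replicated.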
private HashMap<String, AuditorElector> auditorElectors = new HashMap<String, AuditorElector>(); private ZkLedgerUnderreplicationManager urLedgerMgr; private Set<Long> urLedgerList; private List<Long> ledgerList; public AuditorLedgerCheckerTest(String ledgerManagerFactoryClass) throws IOException, KeeperException, InterruptedException, CompatibilityException { super(3); LOG.info("Running test case using ledger manager : " + ledgerManagerFactoryClass); this.digestType = DigestType.CRC32; // set ledger manager name baseConf.setLedgerManagerFactoryClassName(ledgerManagerFactoryClass); baseClientConf .setLedgerManagerFactoryClassName(ledgerManagerFactoryClass); } @Before public void setUp() throws Exception { super.setUp(); urLedgerMgr = new ZkLedgerUnderreplicationManager(baseClientConf, zkc); startAuditorElectors(); rng = new Random(System.currentTimeMillis()); // Initialize the Random urLedgerList = new HashSet<Long>(); ledgerList = new ArrayList<Long>(2); } @Override public void tearDown() throws Exception { stopAuditorElectors(); super.tearDown(); } private void startAuditorElectors() throws Exception { for (BookieServer bserver : bs) { String addr = StringUtils.addrToString(bserver.getLocalAddress()); AuditorElector auditorElector = new AuditorElector(addr, baseConf, zkc); auditorElectors.put(addr, auditorElector); auditorElector.start(); LOG.debug("Starting Auditor Elector"); } } private void stopAuditorElectors() throws Exception { for (AuditorElector auditorElector : auditorElectors.values()) { auditorElector.shutdown(); LOG.debug("Stopping Auditor Elector!"); } } /** * Test publishing of under-replicated ledgers by the auditor bookie */ @Test(timeout=60000) public void testSimpleLedger() throws Exception { LedgerHandle lh1 = createAndAddEntriesToLedger(); Long ledgerId = lh1.getId(); LOG.debug("Created ledger : " + ledgerId); ledgerList.add(ledgerId); lh1.close(); final CountDownLatch underReplicaLatch = registerUrLedgerWatcher(ledgerList .size()); int bkShutdownIndex = bs.size() - 1; String shutdownBookie = shutdownBookie(bkShutdownIndex); // grace period for publishing the bk-ledger LOG.debug("Waiting for ledgers to be marked as under replicated"); underReplicaLatch.await(5, TimeUnit.SECONDS); Map<Long, String> urLedgerData = getUrLedgerData(urLedgerList); assertEquals("Missed identifying under replicated ledgers", 1, urLedgerList.size()); /* * Sample data format present in the under replicated ledger path * * {4=replica: "10.18.89.153:5002"} */ assertTrue("Ledger is not marked as underreplicated:" + ledgerId, urLedgerList.contains(ledgerId)); String data = urLedgerData.get(ledgerId); assertTrue("Bookie " + shutdownBookie + " is not listed in the ledger as missing replica :" + data, data.contains(shutdownBookie)); } /** * Test that a ledger published as under-replicated remains published even after * restarting the respective bookie */ @Test(timeout=60000) public void testRestartBookie() throws Exception { LedgerHandle lh1 = createAndAddEntriesToLedger(); LedgerHandle lh2 = createAndAddEntriesToLedger(); LOG.debug("Created following ledgers : {}, {}", lh1, lh2); int bkShutdownIndex = bs.size() - 1; ServerConfiguration bookieConf1 = bsConfs.get(bkShutdownIndex); String shutdownBookie = shutdownBookie(bkShutdownIndex); // restart the failed bookie bs.add(startBookie(bookieConf1)); waitForLedgerMissingReplicas(lh1.getId(), 10, shutdownBookie); waitForLedgerMissingReplicas(lh2.getId(), 10, shutdownBookie); } /** * Test publishing of under-replicated ledgers when multiple bookie failures happen * one after another.
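* A re-replication is simulated via doLedgerRereplication() between the two failures, so * each failure should be published as a separate missing replica of the same ledger.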
*/ @Test(timeout=60000) public void testMultipleBookieFailures() throws Exception { LedgerHandle lh1 = createAndAddEntriesToLedger(); // failing first bookie shutdownBookie(bs.size() - 1); // simulate re-replication doLedgerRereplication(lh1.getId()); // failing another bookie String shutdownBookie = shutdownBookie(bs.size() - 1); // grace period for publishing the bk-ledger LOG.debug("Waiting for ledgers to be marked as under replicated"); assertTrue("Ledger should be missing second replica", waitForLedgerMissingReplicas(lh1.getId(), 10, shutdownBookie)); } @Test(timeout = 30000) public void testToggleLedgerReplication() throws Exception { LedgerHandle lh1 = createAndAddEntriesToLedger(); ledgerList.add(lh1.getId()); LOG.debug("Created following ledgers : " + ledgerList); // failing another bookie CountDownLatch urReplicaLatch = registerUrLedgerWatcher(ledgerList .size()); // disabling ledger replication urLedgerMgr.disableLedgerReplication(); ArrayList shutdownBookieList = new ArrayList(); shutdownBookieList.add(shutdownBookie(bs.size() - 1)); shutdownBookieList.add(shutdownBookie(bs.size() - 1)); assertFalse("Ledger replication is not disabled!", urReplicaLatch .await(5, TimeUnit.SECONDS)); // enabling ledger replication urLedgerMgr.enableLedgerReplication(); assertTrue("Ledger replication is not enabled!", urReplicaLatch.await( 5, TimeUnit.SECONDS)); } @Test(timeout = 20000) public void testDuplicateEnDisableAutoRecovery() throws Exception { urLedgerMgr.disableLedgerReplication(); try { urLedgerMgr.disableLedgerReplication(); fail("Must throw exception, since AutoRecovery is already disabled"); } catch (UnavailableException e) { assertTrue("AutoRecovery is not disabled previously!", e.getCause() instanceof KeeperException.NodeExistsException); } urLedgerMgr.enableLedgerReplication(); try { urLedgerMgr.enableLedgerReplication(); fail("Must throw exception, since AutoRecovery is already enabled"); } catch (UnavailableException e) { assertTrue("AutoRecovery is not enabled previously!", e.getCause() instanceof KeeperException.NoNodeException); } } /** * Test Auditor should consider Readonly bookie as available bookie. Should not publish ur ledgers for * readonly bookies. */ @Test(timeout = 20000) public void testReadOnlyBookieExclusionFromURLedgersCheck() throws Exception { LedgerHandle lh = createAndAddEntriesToLedger(); ledgerList.add(lh.getId()); LOG.debug("Created following ledgers : " + ledgerList); int count = ledgerList.size(); final CountDownLatch underReplicaLatch = registerUrLedgerWatcher(count); ServerConfiguration bookieConf = bsConfs.get(2); BookieServer bk = bs.get(2); bookieConf.setReadOnlyModeEnabled(true); bk.getBookie().transitionToReadOnlyMode(); // grace period for publishing the bk-ledger LOG.debug("Waiting for Auditor to finish ledger check."); assertFalse("latch should not have completed", underReplicaLatch.await(5, TimeUnit.SECONDS)); } /** * Wait for ledger to be underreplicated, and to be missing all replicas specified */ private boolean waitForLedgerMissingReplicas(Long ledgerId, long secondsToWait, String... 
replicas) throws Exception { for (int i = 0; i < secondsToWait; i++) { try { UnderreplicatedLedgerFormat data = urLedgerMgr.getLedgerUnreplicationInfo(ledgerId); boolean all = true; for (String r : replicas) { all = all && data.getReplicaList().contains(r); } if (all) { return true; } } catch (Exception e) { // may not find node } Thread.sleep(1000); } return false; } private CountDownLatch registerUrLedgerWatcher(int count) throws KeeperException, InterruptedException { final CountDownLatch underReplicaLatch = new CountDownLatch(count); for (Long ledgerId : ledgerList) { Watcher urLedgerWatcher = new ChildWatcher(underReplicaLatch); String znode = ZkLedgerUnderreplicationManager.getUrLedgerZnode(UNDERREPLICATED_PATH, ledgerId); zkc.exists(znode, urLedgerWatcher); } return underReplicaLatch; } private void doLedgerRereplication(Long... ledgerIds) throws UnavailableException { for (int i = 0; i < ledgerIds.length; i++) { long lid = urLedgerMgr.getLedgerToRereplicate(); assertTrue("Received unexpected ledgerid", Arrays.asList(ledgerIds).contains(lid)); urLedgerMgr.markLedgerReplicated(lid); urLedgerMgr.releaseUnderreplicatedLedger(lid); } } private String shutdownBookie(int bkShutdownIndex) throws Exception { BookieServer bkServer = bs.get(bkShutdownIndex); String bookieAddr = StringUtils.addrToString(bkServer.getLocalAddress()); LOG.debug("Shutting down bookie:" + bookieAddr); killBookie(bkShutdownIndex); auditorElectors.get(bookieAddr).shutdown(); auditorElectors.remove(bookieAddr); return bookieAddr; } private LedgerHandle createAndAddEntriesToLedger() throws BKException, InterruptedException { int numEntriesToWrite = 100; // Create a ledger LedgerHandle lh = bkc.createLedger(digestType, ledgerPassword); LOG.info("Ledger ID: " + lh.getId()); addEntry(numEntriesToWrite, lh); return lh; } private void addEntry(int numEntriesToWrite, LedgerHandle lh) throws InterruptedException, BKException { for (int i = 0; i < numEntriesToWrite; i++) { ByteBuffer entry = ByteBuffer.allocate(4); entry.putInt(rng.nextInt(Integer.MAX_VALUE)); entry.position(0); lh.addEntry(entry.array()); } } private Map getUrLedgerData(Set urLedgerList) throws KeeperException, InterruptedException { Map urLedgerData = new HashMap(); for (Long ledgerId : urLedgerList) { String znode = ZkLedgerUnderreplicationManager.getUrLedgerZnode(UNDERREPLICATED_PATH, ledgerId); byte[] data = zkc.getData(znode, false, null); urLedgerData.put(ledgerId, new String(data)); } return urLedgerData; } private class ChildWatcher implements Watcher { private final CountDownLatch underReplicaLatch; public ChildWatcher(CountDownLatch underReplicaLatch) { this.underReplicaLatch = underReplicaLatch; } @Override public void process(WatchedEvent event) { LOG.info("Received notification for the ledger path : " + event.getPath()); for (Long ledgerId : ledgerList) { if (event.getPath().contains(ledgerId + "")) { urLedgerList.add(Long.valueOf(ledgerId)); } } LOG.debug("Count down and waiting for next notification"); // count down and waiting for next notification underReplicaLatch.countDown(); } } } AuditorPeriodicBookieCheckTest.java000066400000000000000000000111751244507361200406670ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/replication/** * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. 
The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.replication; import org.apache.bookkeeper.test.BookKeeperClusterTestCase; import org.apache.bookkeeper.test.TestCallbacks; import java.util.List; import java.net.InetSocketAddress; import org.apache.bookkeeper.util.StringUtils; import org.apache.bookkeeper.zookeeper.ZooKeeperWatcherBase; import org.apache.bookkeeper.client.BookKeeper.DigestType; import org.apache.bookkeeper.client.LedgerHandle; import org.apache.bookkeeper.client.LedgerHandleAdapter; import org.apache.bookkeeper.client.LedgerMetadata; import org.apache.bookkeeper.conf.ServerConfiguration; import org.apache.bookkeeper.meta.LedgerManager; import org.apache.bookkeeper.meta.LedgerManagerFactory; import org.apache.bookkeeper.meta.LedgerUnderreplicationManager; import org.apache.bookkeeper.util.ZkUtils; import org.apache.zookeeper.ZooKeeper; import org.junit.Before; import org.junit.After; import org.junit.Test; import static org.junit.Assert.assertEquals; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * This test verifies that the period check on the auditor * will pick up on missing data in the client */ public class AuditorPeriodicBookieCheckTest extends BookKeeperClusterTestCase { private final static Logger LOG = LoggerFactory .getLogger(AuditorPeriodicBookieCheckTest.class); private AuditorElector auditorElector = null; private ZooKeeper auditorZookeeper = null; private final static int CHECK_INTERVAL = 1; // run every second public AuditorPeriodicBookieCheckTest() { super(3); baseConf.setPageLimit(1); // to make it easy to push ledger out of cache baseConf.setAllowLoopback(true); } @Before @Override public void setUp() throws Exception { super.setUp(); ServerConfiguration conf = new ServerConfiguration(bsConfs.get(0)); conf.setAllowLoopback(true); conf.setAuditorPeriodicBookieCheckInterval(CHECK_INTERVAL); String addr = StringUtils.addrToString(bs.get(0).getLocalAddress()); ZooKeeperWatcherBase w = new ZooKeeperWatcherBase(10000); auditorZookeeper = ZkUtils.createConnectedZookeeperClient( zkUtil.getZooKeeperConnectString(), w); auditorElector = new AuditorElector(addr, conf, auditorZookeeper); auditorElector.start(); } @After @Override public void tearDown() throws Exception { auditorElector.shutdown(); auditorZookeeper.close(); super.tearDown(); } /** * Test that the periodic bookie checker works */ @Test(timeout=30000) public void testPeriodicBookieCheckInterval() throws Exception { LedgerManagerFactory mFactory = LedgerManagerFactory.newLedgerManagerFactory(bsConfs.get(0), zkc); LedgerManager ledgerManager = mFactory.newLedgerManager(); final LedgerUnderreplicationManager underReplicationManager = mFactory.newLedgerUnderreplicationManager(); final int numLedgers = 1; LedgerHandle lh = bkc.createLedger(3, 3, DigestType.CRC32, "passwd".getBytes()); LedgerMetadata md = LedgerHandleAdapter.getLedgerMetadata(lh); List ensemble = md.getEnsembles().get(0L); ensemble.set(0, new InetSocketAddress("1.1.1.1", 1000)); 
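// 1.1.1.1:1000 does not correspond to any bookie in this cluster; rewriting entry 0 of the // first ensemble to point at it (and writing the doctored metadata back below) makes the // ledger appear to be missing a replica, which the periodic bookie check should detect.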
TestCallbacks.GenericCallbackFuture cb = new TestCallbacks.GenericCallbackFuture(); ledgerManager.writeLedgerMetadata(lh.getId(), md, cb); cb.get(); long underReplicatedLedger = -1; for (int i = 0; i < 10; i++) { underReplicatedLedger = underReplicationManager.pollLedgerToRereplicate(); if (underReplicatedLedger != -1) { break; } Thread.sleep(CHECK_INTERVAL*1000); } assertEquals("Ledger should be under replicated", lh.getId(), underReplicatedLedger); } } AuditorPeriodicCheckTest.java000066400000000000000000000301261244507361200375330ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/replication/** * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.replication; import org.apache.bookkeeper.test.BookKeeperClusterTestCase; import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.CountDownLatch; import java.util.HashMap; import java.util.List; import java.util.LinkedList; import java.io.File; import java.io.FileOutputStream; import java.io.FilenameFilter; import java.io.IOException; import java.nio.ByteBuffer; import org.apache.bookkeeper.bookie.BookieAccessor; import org.apache.bookkeeper.util.StringUtils; import org.apache.bookkeeper.zookeeper.ZooKeeperWatcherBase; import org.apache.bookkeeper.client.BookKeeper.DigestType; import org.apache.bookkeeper.client.LedgerHandle; import org.apache.bookkeeper.conf.ServerConfiguration; import org.apache.bookkeeper.meta.LedgerManagerFactory; import org.apache.bookkeeper.meta.LedgerUnderreplicationManager; import org.apache.bookkeeper.util.ZkUtils; import org.apache.bookkeeper.bookie.Bookie; import org.apache.bookkeeper.bookie.LedgerCacheImpl; import org.apache.zookeeper.ZooKeeper; import org.junit.Before; import org.junit.After; import org.junit.Test; import static org.junit.Assert.assertEquals; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * This test verifies that the period check on the auditor * will pick up on missing data in the client */ public class AuditorPeriodicCheckTest extends BookKeeperClusterTestCase { private final static Logger LOG = LoggerFactory .getLogger(AuditorPeriodicCheckTest.class); private HashMap auditorElectors = new HashMap(); private List zkClients = new LinkedList(); private final static int CHECK_INTERVAL = 1; // run every second public AuditorPeriodicCheckTest() { super(3); baseConf.setPageLimit(1); // to make it easy to push ledger out of cache } @Before @Override public void setUp() throws Exception { super.setUp(); for (int i = 0; i < numBookies; i++) { ServerConfiguration conf = new ServerConfiguration(bsConfs.get(i)); conf.setAuditorPeriodicCheckInterval(CHECK_INTERVAL); String addr = 
StringUtils.addrToString(bs.get(i).getLocalAddress()); ZooKeeperWatcherBase w = new ZooKeeperWatcherBase(10000); ZooKeeper zk = ZkUtils.createConnectedZookeeperClient( zkUtil.getZooKeeperConnectString(), w); zkClients.add(zk); AuditorElector auditorElector = new AuditorElector(addr, conf, zk); auditorElectors.put(addr, auditorElector); auditorElector.start(); LOG.debug("Starting Auditor Elector"); } } @After @Override public void tearDown() throws Exception { for (AuditorElector e : auditorElectors.values()) { e.shutdown(); } for (ZooKeeper zk : zkClients) { zk.close(); } zkClients.clear(); super.tearDown(); } /** * test that the periodic checking will detect corruptions in * the bookie entry log */ @Test(timeout=30000) public void testEntryLogCorruption() throws Exception { LedgerManagerFactory mFactory = LedgerManagerFactory.newLedgerManagerFactory(bsConfs.get(0), zkc); LedgerUnderreplicationManager underReplicationManager = mFactory.newLedgerUnderreplicationManager(); underReplicationManager.disableLedgerReplication(); LedgerHandle lh = bkc.createLedger(3, 3, DigestType.CRC32, "passwd".getBytes()); long ledgerId = lh.getId(); for (int i = 0; i < 100; i++) { lh.addEntry("testdata".getBytes()); } lh.close(); BookieAccessor.forceFlush(bs.get(0).getBookie()); File ledgerDir = bsConfs.get(0).getLedgerDirs()[0]; ledgerDir = Bookie.getCurrentDirectory(ledgerDir); // corrupt the entry logs File[] entryLogs = ledgerDir.listFiles(new FilenameFilter() { public boolean accept(File dir, String name) { return name.endsWith(".log"); } }); ByteBuffer junk = ByteBuffer.allocate(1024*1024); for (File f : entryLogs) { FileOutputStream out = new FileOutputStream(f); out.getChannel().write(junk); out.close(); } restartBookies(); // restart to clear read buffers underReplicationManager.enableLedgerReplication(); long underReplicatedLedger = -1; for (int i = 0; i < 10; i++) { underReplicatedLedger = underReplicationManager.pollLedgerToRereplicate(); if (underReplicatedLedger != -1) { break; } Thread.sleep(CHECK_INTERVAL * 1000); } assertEquals("Ledger should be under replicated", ledgerId, underReplicatedLedger); underReplicationManager.close(); } /** * test that the periodic checker will detect corruptions in * the bookie index files */ @Test(timeout=30000) public void testIndexCorruption() throws Exception { LedgerManagerFactory mFactory = LedgerManagerFactory.newLedgerManagerFactory(bsConfs.get(0), zkc); LedgerUnderreplicationManager underReplicationManager = mFactory.newLedgerUnderreplicationManager(); LedgerHandle lh = bkc.createLedger(3, 3, DigestType.CRC32, "passwd".getBytes()); long ledgerToCorrupt = lh.getId(); for (int i = 0; i < 100; i++) { lh.addEntry("testdata".getBytes()); } lh.close(); // push ledgerToCorrupt out of page cache (bookie is configured to only use 1 page) lh = bkc.createLedger(3, 3, DigestType.CRC32, "passwd".getBytes()); for (int i = 0; i < 100; i++) { lh.addEntry("testdata".getBytes()); } lh.close(); BookieAccessor.forceFlush(bs.get(0).getBookie()); File ledgerDir = bsConfs.get(0).getLedgerDirs()[0]; ledgerDir = Bookie.getCurrentDirectory(ledgerDir); // corrupt the index file File index = new File(ledgerDir, LedgerCacheImpl.getLedgerName(ledgerToCorrupt)); LOG.info("file to corrupt {}", index); ByteBuffer junk = ByteBuffer.allocate(1024*1024); FileOutputStream out = new FileOutputStream(index); out.getChannel().write(junk); out.close(); long underReplicatedLedger = -1; for (int i = 0; i < 10; i++) { underReplicatedLedger = underReplicationManager.pollLedgerToRereplicate(); if
(underReplicatedLedger != -1) { break; } Thread.sleep(CHECK_INTERVAL * 1000); } assertEquals("Ledger should be under replicated", ledgerToCorrupt, underReplicatedLedger); underReplicationManager.close(); } /** * Test that the periodic checker will not run when auto replication has been disabled */ @Test(timeout=60000) public void testPeriodicCheckWhenDisabled() throws Exception { LedgerManagerFactory mFactory = LedgerManagerFactory.newLedgerManagerFactory(bsConfs.get(0), zkc); final LedgerUnderreplicationManager underReplicationManager = mFactory.newLedgerUnderreplicationManager(); final int numLedgers = 100; for (int i = 0; i < numLedgers; i++) { LedgerHandle lh = bkc.createLedger(3, 3, DigestType.CRC32, "passwd".getBytes()); for (int j = 0; j < 100; j++) { lh.addEntry("testdata".getBytes()); } lh.close(); } underReplicationManager.disableLedgerReplication(); final AtomicInteger numReads = new AtomicInteger(0); ServerConfiguration conf = killBookie(0); Bookie deadBookie = new Bookie(conf) { @Override public ByteBuffer readEntry(long ledgerId, long entryId) throws IOException, NoLedgerException { // count the read attempts; nothing should read while checking is disabled numReads.incrementAndGet(); throw new IOException("Fake I/O exception"); } }; bsConfs.add(conf); bs.add(startBookie(conf, deadBookie)); Thread.sleep(CHECK_INTERVAL * 2000); assertEquals("Nothing should have tried to read", 0, numReads.get()); underReplicationManager.enableLedgerReplication(); Thread.sleep(CHECK_INTERVAL * 2000); // give it time to run underReplicationManager.disableLedgerReplication(); // give it time to stop, from this point nothing new should be marked Thread.sleep(CHECK_INTERVAL * 2000); int numUnderreplicated = 0; long underReplicatedLedger = -1; do { underReplicatedLedger = underReplicationManager.pollLedgerToRereplicate(); if (underReplicatedLedger == -1) { break; } numUnderreplicated++; underReplicationManager.markLedgerReplicated(underReplicatedLedger); } while (underReplicatedLedger != -1); Thread.sleep(CHECK_INTERVAL * 2000); // give a chance to run again (it shouldn't, it's disabled) // ensure that nothing is marked as underreplicated underReplicatedLedger = underReplicationManager.pollLedgerToRereplicate(); assertEquals("There should be no underreplicated ledgers", -1, underReplicatedLedger); LOG.info("{} of {} ledgers underreplicated", numUnderreplicated, numLedgers); assertTrue("All should be underreplicated", numUnderreplicated <= numLedgers && numUnderreplicated > 0); } /** * Test that the periodic check will succeed if a ledger is deleted midway */ @Test(timeout=60000) public void testPeriodicCheckWhenLedgerDeleted() throws Exception { for (AuditorElector e : auditorElectors.values()) { e.shutdown(); } final int numLedgers = 100; List<Long> ids = new LinkedList<Long>(); for (int i = 0; i < numLedgers; i++) { LedgerHandle lh = bkc.createLedger(3, 3, DigestType.CRC32, "passwd".getBytes()); ids.add(lh.getId()); for (int j = 0; j < 10; j++) { lh.addEntry("testdata".getBytes()); } lh.close(); } final Auditor auditor = new Auditor( StringUtils.addrToString(Bookie.getBookieAddress(bsConfs.get(0))), bsConfs.get(0), zkc); final AtomicBoolean exceptionCaught = new AtomicBoolean(false); final CountDownLatch latch = new CountDownLatch(1); Thread t = new Thread() { public void run() { try { latch.countDown(); for (int i = 0; i < numLedgers; i++) { auditor.checkAllLedgers(); } } catch (Exception e) { LOG.error("Caught exception while checking all ledgers", e); exceptionCaught.set(true); } } }; t.start(); latch.await(); for (Long id : ids) {
bkc.deleteLedger(id); } t.join(); assertFalse("Shouldn't have thrown exception", exceptionCaught.get()); } } AuditorRollingRestartTest.java000066400000000000000000000055271244507361200400210ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/replication/** * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.replication; import org.apache.bookkeeper.test.BookKeeperClusterTestCase; import org.apache.bookkeeper.test.TestCallbacks; import java.net.InetSocketAddress; import org.apache.bookkeeper.client.BookKeeper.DigestType; import org.apache.bookkeeper.client.LedgerHandle; import org.apache.bookkeeper.conf.ServerConfiguration; import org.apache.bookkeeper.meta.LedgerManagerFactory; import org.apache.bookkeeper.meta.LedgerUnderreplicationManager; import org.junit.Test; /** * Test auditor behaviours during a rolling restart */ public class AuditorRollingRestartTest extends BookKeeperClusterTestCase { public AuditorRollingRestartTest() { super(3); // run the daemon within the bookie baseConf.setAutoRecoveryDaemonEnabled(true); } /** * Test no auditing during restart if disabled */ @Test(timeout=600000) // 10 minutes public void testAuditingDuringRollingRestart() throws Exception { LedgerManagerFactory mFactory = LedgerManagerFactory.newLedgerManagerFactory(bsConfs.get(0), zkc); final LedgerUnderreplicationManager underReplicationManager = mFactory.newLedgerUnderreplicationManager(); LedgerHandle lh = bkc.createLedger(3, 3, DigestType.CRC32, "passwd".getBytes()); for (int i = 0; i < 10; i++) { lh.asyncAddEntry("foobar".getBytes(), new TestCallbacks.AddCallbackFuture(i), null); } lh.addEntry("foobar".getBytes()); lh.close(); assertEquals("shouldn't be anything under replicated", underReplicationManager.pollLedgerToRereplicate(), -1); underReplicationManager.disableLedgerReplication(); InetSocketAddress auditor = AuditorElector.getCurrentAuditor(baseConf, zkc); ServerConfiguration conf = killBookie(auditor); Thread.sleep(2000); startBookie(conf); Thread.sleep(2000); // give it time to run assertEquals("shouldn't be anything under replicated", -1, underReplicationManager.pollLedgerToRereplicate()); } } AutoRecoveryMainTest.java000066400000000000000000000076151244507361200367520ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/replication/** * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. 
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.replication; import java.util.concurrent.CountDownLatch; import org.apache.bookkeeper.test.BookKeeperClusterTestCase; import org.junit.Test; /* * Tests for AutoRecoveryMain */ public class AutoRecoveryMainTest extends BookKeeperClusterTestCase { public AutoRecoveryMainTest() { super(3); } /* * test the startup of the auditorElector and RW. */ @Test(timeout=60000) public void testStartup() throws Exception { AutoRecoveryMain main = new AutoRecoveryMain(bsConfs.get(0)); try { main.start(); Thread.sleep(500); assertTrue("AuditorElector should be running", main.auditorElector.isRunning()); assertTrue("Replication worker should be running", main.replicationWorker.isRunning()); } finally { main.shutdown(); } } /* * Test the shutdown of all daemons */ @Test(timeout=60000) public void testShutdown() throws Exception { AutoRecoveryMain main = new AutoRecoveryMain(bsConfs.get(0)); main.start(); Thread.sleep(500); assertTrue("AuditorElector should be running", main.auditorElector.isRunning()); assertTrue("Replication worker should be running", main.replicationWorker.isRunning()); main.shutdown(); assertFalse("AuditorElector should not be running", main.auditorElector.isRunning()); assertFalse("Replication worker should not be running", main.replicationWorker.isRunning()); } /** * Test that, if an autorecovery daemon loses its ZK connection/session, * it will shut down. */ @Test(timeout=60000) public void testAutoRecoverySessionLoss() throws Exception { AutoRecoveryMain main1 = new AutoRecoveryMain(bsConfs.get(0)); AutoRecoveryMain main2 = new AutoRecoveryMain(bsConfs.get(1)); main1.start(); main2.start(); Thread.sleep(500); assertTrue("AuditorElectors should be running", main1.auditorElector.isRunning() && main2.auditorElector.isRunning()); assertTrue("Replication workers should be running", main1.replicationWorker.isRunning() && main2.replicationWorker.isRunning()); zkUtil.expireSession(main1.zk); zkUtil.expireSession(main2.zk); for (int i = 0; i < 10; i++) { // give it 10 seconds to shut down if (!main1.auditorElector.isRunning() && !main2.auditorElector.isRunning() && !main1.replicationWorker.isRunning() && !main2.replicationWorker.isRunning()) { break; } Thread.sleep(1000); } assertFalse("Elector1 should have shut down", main1.auditorElector.isRunning()); assertFalse("Elector2 should have shut down", main2.auditorElector.isRunning()); assertFalse("RW1 should have shut down", main1.replicationWorker.isRunning()); assertFalse("RW2 should have shut down", main2.replicationWorker.isRunning()); } } BookieAutoRecoveryTest.java000066400000000000000000000463131244507361200372740ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/replication/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License.
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.replication; import java.io.IOException; import java.net.InetSocketAddress; import java.util.ArrayList; import java.util.List; import java.util.SortedMap; import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; import org.apache.bookkeeper.client.BKException; import org.apache.bookkeeper.client.BookKeeper.DigestType; import org.apache.bookkeeper.client.LedgerHandle; import org.apache.bookkeeper.client.LedgerHandleAdapter; import org.apache.bookkeeper.meta.LedgerManager; import org.apache.bookkeeper.meta.LedgerManagerFactory; import org.apache.bookkeeper.meta.LedgerUnderreplicationManager; import org.apache.bookkeeper.meta.ZkLedgerUnderreplicationManager; import org.apache.bookkeeper.proto.BookieServer; import org.apache.bookkeeper.replication.ReplicationException.CompatibilityException; import org.apache.bookkeeper.replication.ReplicationException.UnavailableException; import org.apache.bookkeeper.test.BookKeeperClusterTestCase; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.WatchedEvent; import org.apache.zookeeper.Watcher; import org.apache.zookeeper.Watcher.Event.EventType; import org.apache.zookeeper.data.Stat; import org.junit.Test; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Integration tests verifying the complete functionality of the * Auditor re-replication process: the Auditor will publish the bookie failures, and * the ReplicationWorker will consequently get the notifications and act on them.
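* The common flow in these tests: kill a bookie from a ledger's ensemble, wait for the * auditor to publish the corresponding urLedger znode, start a replacement bookie, wait * for the znode to be cleared, and finally verify the re-replicated ensemble metadata.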
*/ public class BookieAutoRecoveryTest extends BookKeeperClusterTestCase { private static final Logger LOG = LoggerFactory .getLogger(BookieAutoRecoveryTest.class); private static final byte[] PASSWD = "admin".getBytes(); private static final byte[] data = "TESTDATA".getBytes(); private static final String openLedgerRereplicationGracePeriod = "3000"; // milliseconds private DigestType digestType; private LedgerManagerFactory mFactory; private LedgerUnderreplicationManager underReplicationManager; private LedgerManager ledgerManager; private final String UNDERREPLICATED_PATH = baseClientConf .getZkLedgersRootPath() + "/underreplication/ledgers"; public BookieAutoRecoveryTest() throws IOException, KeeperException, InterruptedException, UnavailableException, CompatibilityException { super(3); baseConf.setLedgerManagerFactoryClassName( "org.apache.bookkeeper.meta.HierarchicalLedgerManagerFactory"); baseConf.setOpenLedgerRereplicationGracePeriod(openLedgerRereplicationGracePeriod); baseClientConf.setLedgerManagerFactoryClassName( "org.apache.bookkeeper.meta.HierarchicalLedgerManagerFactory"); this.digestType = DigestType.MAC; setAutoRecoveryEnabled(true); } @Override public void setUp() throws Exception { super.setUp(); baseConf.setZkServers(zkUtil.getZooKeeperConnectString()); // initialize urReplicationManager mFactory = LedgerManagerFactory.newLedgerManagerFactory(baseClientConf, zkc); underReplicationManager = mFactory.newLedgerUnderreplicationManager(); LedgerManagerFactory newLedgerManagerFactory = LedgerManagerFactory .newLedgerManagerFactory(baseClientConf, zkc); ledgerManager = newLedgerManagerFactory.newLedgerManager(); } @Override public void tearDown() throws Exception { super.tearDown(); if (null != mFactory) { mFactory.uninitialize(); mFactory = null; } if (null != underReplicationManager) { underReplicationManager.close(); underReplicationManager = null; } if (null != ledgerManager) { ledgerManager.close(); ledgerManager = null; } } /** * Test verifies that the Auditor publishes the urLedger and that the replication * worker picks up the entries and finishes the re-replication of an open ledger */ @Test(timeout = 90000) public void testOpenLedgers() throws Exception { List<LedgerHandle> listOfLedgerHandle = createLedgersAndAddEntries(1, 5); LedgerHandle lh = listOfLedgerHandle.get(0); int ledgerReplicaIndex = 0; InetSocketAddress replicaToKillAddr = LedgerHandleAdapter .getLedgerMetadata(lh).getEnsembles().get(0L).get(0); final String urLedgerZNode = getUrLedgerZNode(lh); ledgerReplicaIndex = getReplicaIndexInLedger(lh, replicaToKillAddr); CountDownLatch latch = new CountDownLatch(1); assertNull("UrLedger already exists!", watchUrLedgerNode(urLedgerZNode, latch)); LOG.info("Killing Bookie :" + replicaToKillAddr); killBookie(replicaToKillAddr); // waiting to publish urLedger znode by Auditor latch.await(); latch = new CountDownLatch(1); LOG.info("Watching on urLedgerPath:" + urLedgerZNode + " to know the status of rereplication process"); assertNotNull("UrLedger doesn't exist!", watchUrLedgerNode(urLedgerZNode, latch)); // starting the replication service, so that it will be able to act as // the target bookie startNewBookie(); int newBookieIndex = bs.size() - 1; BookieServer newBookieServer = bs.get(newBookieIndex); LOG.debug("Waiting to finish the replication of failed bookie : " + replicaToKillAddr); latch.await(); // grace period to update the urledger metadata in zookeeper LOG.info("Waiting to update the urledger metadata in zookeeper"); verifyLedgerEnsembleMetadataAfterReplication(newBookieServer,
listOfLedgerHandle.get(0), ledgerReplicaIndex); } /** * Test verifies that the Auditor publishes the urLedger and that the replication * worker picks up the entries and finishes the re-replication of closed ledgers */ @Test(timeout = 90000) public void testClosedLedgers() throws Exception { List<Integer> listOfReplicaIndex = new ArrayList<Integer>(); List<LedgerHandle> listOfLedgerHandle = createLedgersAndAddEntries(1, 5); closeLedgers(listOfLedgerHandle); LedgerHandle lhandle = listOfLedgerHandle.get(0); int ledgerReplicaIndex = 0; InetSocketAddress replicaToKillAddr = LedgerHandleAdapter .getLedgerMetadata(lhandle).getEnsembles().get(0L).get(0); CountDownLatch latch = new CountDownLatch(listOfLedgerHandle.size()); for (LedgerHandle lh : listOfLedgerHandle) { ledgerReplicaIndex = getReplicaIndexInLedger(lh, replicaToKillAddr); listOfReplicaIndex.add(ledgerReplicaIndex); assertNull("UrLedger already exists!", watchUrLedgerNode(getUrLedgerZNode(lh), latch)); } LOG.info("Killing Bookie :" + replicaToKillAddr); killBookie(replicaToKillAddr); // waiting to publish urLedger znode by Auditor latch.await(); // Again watching the urLedger znode to know the replication status latch = new CountDownLatch(listOfLedgerHandle.size()); for (LedgerHandle lh : listOfLedgerHandle) { String urLedgerZNode = getUrLedgerZNode(lh); LOG.info("Watching on urLedgerPath:" + urLedgerZNode + " to know the status of rereplication process"); assertNotNull("UrLedger doesn't exist!", watchUrLedgerNode(urLedgerZNode, latch)); } // starting the replication service, so that it will be able to act as // the target bookie startNewBookie(); int newBookieIndex = bs.size() - 1; BookieServer newBookieServer = bs.get(newBookieIndex); LOG.debug("Waiting to finish the replication of failed bookie : " + replicaToKillAddr); // waiting to finish replication latch.await(); // grace period to update the urledger metadata in zookeeper LOG.info("Waiting to update the urledger metadata in zookeeper"); for (int index = 0; index < listOfLedgerHandle.size(); index++) { verifyLedgerEnsembleMetadataAfterReplication(newBookieServer, listOfLedgerHandle.get(index), listOfReplicaIndex.get(index)); } } /** * Test stopping the replication service while replication is in progress. When * there is an exception, the Auditor and RW processes will shut down.
After * restarting, they should be able to finish the re-replication activities */ @Test(timeout = 90000) public void testStopWhileReplicationInProgress() throws Exception { int numberOfLedgers = 2; List<Integer> listOfReplicaIndex = new ArrayList<Integer>(); List<LedgerHandle> listOfLedgerHandle = createLedgersAndAddEntries( numberOfLedgers, 5); closeLedgers(listOfLedgerHandle); LedgerHandle handle = listOfLedgerHandle.get(0); InetSocketAddress replicaToKillAddr = LedgerHandleAdapter .getLedgerMetadata(handle).getEnsembles().get(0L).get(0); LOG.info("Killing Bookie:" + replicaToKillAddr); // For each ledger there will be two events: create urLedger and, after // rereplication, delete urLedger CountDownLatch latch = new CountDownLatch(listOfLedgerHandle.size()); for (int i = 0; i < listOfLedgerHandle.size(); i++) { final String urLedgerZNode = getUrLedgerZNode(listOfLedgerHandle .get(i)); assertNull("UrLedger already exists!", watchUrLedgerNode(urLedgerZNode, latch)); int replicaIndexInLedger = getReplicaIndexInLedger( listOfLedgerHandle.get(i), replicaToKillAddr); listOfReplicaIndex.add(replicaIndexInLedger); } LOG.info("Killing Bookie :" + replicaToKillAddr); killBookie(replicaToKillAddr); // waiting to publish urLedger znode by Auditor latch.await(); // Again watching the urLedger znode to know the replication status latch = new CountDownLatch(listOfLedgerHandle.size()); for (LedgerHandle lh : listOfLedgerHandle) { String urLedgerZNode = getUrLedgerZNode(lh); LOG.info("Watching on urLedgerPath:" + urLedgerZNode + " to know the status of rereplication process"); assertNotNull("UrLedger doesn't exist!", watchUrLedgerNode(urLedgerZNode, latch)); } // starting the replication service, so that it will be able to act as // the target bookie startNewBookie(); int newBookieIndex = bs.size() - 1; BookieServer newBookieServer = bs.get(newBookieIndex); LOG.debug("Waiting to finish the replication of failed bookie : " + replicaToKillAddr); while (true) { if (latch.getCount() < numberOfLedgers || latch.getCount() <= 0) { stopReplicationService(); LOG.info("Latch Count is:" + latch.getCount()); break; } // grace period to take breath Thread.sleep(1000); } startReplicationService(); LOG.info("Waiting to finish rereplication processes"); latch.await(); // grace period to update the urledger metadata in zookeeper LOG.info("Waiting to update the urledger metadata in zookeeper"); for (int index = 0; index < listOfLedgerHandle.size(); index++) { verifyLedgerEnsembleMetadataAfterReplication(newBookieServer, listOfLedgerHandle.get(index), listOfReplicaIndex.get(index)); } } /** * Verify that the published urledgers of deleted ledgers (those ledgers were * deleted after being published as urledgers by the Auditor) are cleared off * by the newly selected replica bookie */ @Test(timeout = 30000) public void testNoSuchLedgerExists() throws Exception { List<LedgerHandle> listOfLedgerHandle = createLedgersAndAddEntries(2, 5); CountDownLatch latch = new CountDownLatch(listOfLedgerHandle.size()); for (LedgerHandle lh : listOfLedgerHandle) { assertNull("UrLedger already exists!", watchUrLedgerNode(getUrLedgerZNode(lh), latch)); } InetSocketAddress replicaToKillAddr = LedgerHandleAdapter .getLedgerMetadata(listOfLedgerHandle.get(0)).getEnsembles() .get(0L).get(0); killBookie(replicaToKillAddr); replicaToKillAddr = LedgerHandleAdapter .getLedgerMetadata(listOfLedgerHandle.get(0)).getEnsembles() .get(0L).get(0); killBookie(replicaToKillAddr); // waiting to publish urLedger znode by Auditor latch.await(); latch = new CountDownLatch(listOfLedgerHandle.size()); for (LedgerHandle lh :
listOfLedgerHandle) { assertNotNull("UrLedger doesn't exist!", watchUrLedgerNode(getUrLedgerZNode(lh), latch)); } // delete ledgers for (LedgerHandle lh : listOfLedgerHandle) { bkc.deleteLedger(lh.getId()); } startNewBookie(); // waiting to delete the published urledgers, since the ledgers no longer exist latch.await(); for (LedgerHandle lh : listOfLedgerHandle) { assertNull("UrLedger still exists after rereplication", watchUrLedgerNode(getUrLedgerZNode(lh), latch)); } } /** * Test that if an empty ledger loses the bookie not in the quorum for entry 0, it will * still be openable when it loses enough bookies to lose a whole quorum. */ @Test(timeout=10000) public void testEmptyLedgerLosesQuorumEventually() throws Exception { LedgerHandle lh = bkc.createLedger(3, 2, 2, DigestType.CRC32, PASSWD); CountDownLatch latch = new CountDownLatch(1); String urZNode = getUrLedgerZNode(lh); watchUrLedgerNode(urZNode, latch); InetSocketAddress replicaToKill = LedgerHandleAdapter .getLedgerMetadata(lh).getEnsembles().get(0L).get(2); LOG.info("Killing last bookie, {}, in ensemble {}", replicaToKill, LedgerHandleAdapter.getLedgerMetadata(lh).getEnsembles().get(0L)); killBookie(replicaToKill); getAuditor(10, TimeUnit.SECONDS).submitAuditTask().get(); // ensure auditor runs assertTrue("Should be marked as underreplicated", latch.await(5, TimeUnit.SECONDS)); latch = new CountDownLatch(1); Stat s = watchUrLedgerNode(urZNode, latch); // should be marked as replicated if (s != null) { assertTrue("Should be marked as replicated", latch.await(10, TimeUnit.SECONDS)); } replicaToKill = LedgerHandleAdapter .getLedgerMetadata(lh).getEnsembles().get(0L).get(1); LOG.info("Killing second bookie, {}, in ensemble {}", replicaToKill, LedgerHandleAdapter.getLedgerMetadata(lh).getEnsembles().get(0L)); killBookie(replicaToKill); getAuditor(10, TimeUnit.SECONDS).submitAuditTask().get(); // ensure auditor runs assertTrue("Should be marked as underreplicated", latch.await(5, TimeUnit.SECONDS)); latch = new CountDownLatch(1); s = watchUrLedgerNode(urZNode, latch); // should be marked as replicated if (s != null) { assertTrue("Should be marked as replicated", latch.await(5, TimeUnit.SECONDS)); } // should be able to open ledger without issue bkc.openLedger(lh.getId(), DigestType.CRC32, PASSWD); } private int getReplicaIndexInLedger(LedgerHandle lh, InetSocketAddress replicaToKill) { SortedMap<Long, ArrayList<InetSocketAddress>> ensembles = LedgerHandleAdapter .getLedgerMetadata(lh).getEnsembles(); int ledgerReplicaIndex = -1; for (InetSocketAddress addr : ensembles.get(0L)) { ++ledgerReplicaIndex; if (addr.equals(replicaToKill)) { break; } } return ledgerReplicaIndex; } private void verifyLedgerEnsembleMetadataAfterReplication( BookieServer newBookieServer, LedgerHandle lh, int ledgerReplicaIndex) throws Exception { LedgerHandle openLedger = bkc .openLedger(lh.getId(), digestType, PASSWD); InetSocketAddress inetSocketAddress = LedgerHandleAdapter .getLedgerMetadata(openLedger).getEnsembles().get(0L) .get(ledgerReplicaIndex); assertEquals("Rereplication has failed for ledgerReplicaIndex :" + ledgerReplicaIndex, newBookieServer.getLocalAddress(), inetSocketAddress); } private void closeLedgers(List<LedgerHandle> listOfLedgerHandle) throws InterruptedException, BKException { for (LedgerHandle lh : listOfLedgerHandle) { lh.close(); } } private List<LedgerHandle> createLedgersAndAddEntries(int numberOfLedgers, int numberOfEntries) throws InterruptedException, BKException { List<LedgerHandle> listOfLedgerHandle = new ArrayList<LedgerHandle>( numberOfLedgers); for (int index = 0; index < numberOfLedgers; index++) { LedgerHandle lh =
bkc.createLedger(3, 3, digestType, PASSWD); listOfLedgerHandle.add(lh); for (int i = 0; i < numberOfEntries; i++) { lh.addEntry(data); } } return listOfLedgerHandle; } private String getUrLedgerZNode(LedgerHandle lh) { return ZkLedgerUnderreplicationManager.getUrLedgerZnode( UNDERREPLICATED_PATH, lh.getId()); } private Stat watchUrLedgerNode(final String znode, final CountDownLatch latch) throws KeeperException, InterruptedException { return zkc.exists(znode, new Watcher() { @Override public void process(WatchedEvent event) { if (event.getType() == EventType.NodeDeleted) { LOG.info("Received Ledger rereplication completion event :" + event.getType()); latch.countDown(); } if (event.getType() == EventType.NodeCreated) { LOG.info("Received urLedger publishing event :" + event.getType()); latch.countDown(); } } }); } } BookieLedgerIndexTest.java000066400000000000000000000210601244507361200370300ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/replication/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License.
*/ package org.apache.bookkeeper.replication; import java.io.File; import java.io.IOException; import java.nio.ByteBuffer; import java.util.ArrayList; import java.util.Collection; import java.util.List; import java.util.Map; import java.util.Random; import java.util.Set; import org.apache.bookkeeper.client.BKException; import org.apache.bookkeeper.client.LedgerHandle; import org.apache.bookkeeper.client.BookKeeper.DigestType; import org.apache.bookkeeper.meta.LedgerManager; import org.apache.bookkeeper.meta.LedgerManagerFactory; import org.apache.bookkeeper.meta.MSLedgerManagerFactory; import org.apache.bookkeeper.replication.ReplicationException.BKAuditException; import org.apache.bookkeeper.test.MultiLedgerManagerTestCase; import org.apache.commons.io.FileUtils; import org.apache.zookeeper.KeeperException; import org.junit.After; import org.junit.Before; import org.junit.Test; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Tests verify the bookie vs ledger mapping generated by the BookieLedgerIndexer */ public class BookieLedgerIndexTest extends MultiLedgerManagerTestCase { // Depending on the taste, select the amount of logging // by uncommenting one of the two lines below // static Logger LOG = Logger.getRootLogger(); private static final Logger LOG = LoggerFactory .getLogger(BookieLedgerIndexTest.class); private Random rng; // Random Number Generator private ArrayList<byte[]> entries; // generated entries private final DigestType digestType = DigestType.CRC32; private int numberOfLedgers = 3; private List<Long> ledgerList; private LedgerManagerFactory newLedgerManagerFactory; private LedgerManager ledgerManager; public BookieLedgerIndexTest(String ledgerManagerFactory) throws IOException, KeeperException, InterruptedException { super(3); LOG.info("Running test case using ledger manager : " + ledgerManagerFactory); // set ledger manager name baseConf.setLedgerManagerFactoryClassName(ledgerManagerFactory); baseClientConf.setLedgerManagerFactoryClassName(ledgerManagerFactory); } @Before public void setUp() throws Exception { super.setUp(); rng = new Random(System.currentTimeMillis()); // Initialize the Random // Number Generator entries = new ArrayList<byte[]>(); // initialize the entries list ledgerList = new ArrayList<Long>(3); // initialize ledger manager newLedgerManagerFactory = LedgerManagerFactory.newLedgerManagerFactory( baseConf, zkc); ledgerManager = newLedgerManagerFactory.newLedgerManager(); } @After public void tearDown() throws Exception { super.tearDown(); if (null != newLedgerManagerFactory) { newLedgerManagerFactory.uninitialize(); newLedgerManagerFactory = null; } if (null != ledgerManager) { ledgerManager.close(); ledgerManager = null; } } /** * Verify the bookie-ledger mapping with minimum number of bookies and few * ledgers */ @Test(timeout=60000) public void testSimpleBookieLedgerMapping() throws Exception { for (int i = 0; i < numberOfLedgers; i++) { createAndAddEntriesToLedger().close(); } BookieLedgerIndexer bookieLedgerIndex = new BookieLedgerIndexer( ledgerManager); Map<String, Set<Long>> bookieToLedgerIndex = bookieLedgerIndex .getBookieToLedgerIndex(); assertEquals("Missed few bookies in the bookie-ledger mapping!", 3, bookieToLedgerIndex.size()); Collection<Set<Long>> bk2ledgerEntry = bookieToLedgerIndex.values(); for (Set<Long> ledgers : bk2ledgerEntry) { assertEquals("Missed few ledgers in the bookie-ledger mapping!", 3, ledgers.size()); for (Long ledgerId : ledgers) { assertTrue("Unknown ledger-bookie mapping", ledgerList .contains(ledgerId)); } } } /** * Verify that building the ledger index
that it throws an exception */ @Test(timeout=60000) public void testWithoutZookeeper() throws Exception { // This test case is for ledger metadata that is stored in ZooKeeper. As // far as MSLedgerManagerFactory is concerned, ledger metadata is stored in other // storage. So this test is not suitable for MSLedgerManagerFactory. if (newLedgerManagerFactory instanceof MSLedgerManagerFactory) { return; } for (int i = 0; i < numberOfLedgers; i++) { createAndAddEntriesToLedger().close(); } BookieLedgerIndexer bookieLedgerIndex = new BookieLedgerIndexer( ledgerManager); stopZKCluster(); try { bookieLedgerIndex.getBookieToLedgerIndex(); fail("Must throw exception as zookeeper is not running!"); } catch (BKAuditException bkAuditException) { // expected behaviour } } /** * Verify indexing with multiple ensemble reformation */ @Test(timeout=60000) public void testEnsembleReformation() throws Exception { try { LedgerHandle lh1 = createAndAddEntriesToLedger(); LedgerHandle lh2 = createAndAddEntriesToLedger(); startNewBookie(); shutdownBookie(bs.size() - 2); // add a few more entries after ensemble reformation for (int i = 0; i < 10; i++) { ByteBuffer entry = ByteBuffer.allocate(4); entry.putInt(rng.nextInt(Integer.MAX_VALUE)); entry.position(0); entries.add(entry.array()); lh1.addEntry(entry.array()); lh2.addEntry(entry.array()); } BookieLedgerIndexer bookieLedgerIndex = new BookieLedgerIndexer( ledgerManager); Map<String, Set<Long>> bookieToLedgerIndex = bookieLedgerIndex .getBookieToLedgerIndex(); assertEquals("Missed few bookies in the bookie-ledger mapping!", 4, bookieToLedgerIndex.size()); Collection<Set<Long>> bk2ledgerEntry = bookieToLedgerIndex.values(); for (Set<Long> ledgers : bk2ledgerEntry) { assertEquals( "Missed few ledgers in the bookie-ledger mapping!", 2, ledgers.size()); for (Long ledgerNode : ledgers) { assertTrue("Unknown ledger-bookie mapping", ledgerList .contains(ledgerNode)); } } } catch (BKException e) { LOG.error("Test failed", e); fail("Test failed due to BookKeeper exception"); } catch (InterruptedException e) { LOG.error("Test failed", e); fail("Test failed due to interruption"); } } private void shutdownBookie(int bkShutdownIndex) throws IOException { bs.remove(bkShutdownIndex).shutdown(); File f = tmpDirs.remove(bkShutdownIndex); FileUtils.deleteDirectory(f); } private LedgerHandle createAndAddEntriesToLedger() throws BKException, InterruptedException { int numEntriesToWrite = 20; // Create a ledger LedgerHandle lh = bkc.createLedger(digestType, "admin".getBytes()); LOG.info("Ledger ID: " + lh.getId()); for (int i = 0; i < numEntriesToWrite; i++) { ByteBuffer entry = ByteBuffer.allocate(4); entry.putInt(rng.nextInt(Integer.MAX_VALUE)); entry.position(0); entries.add(entry.array()); lh.addEntry(entry.array()); } ledgerList.add(lh.getId()); return lh; } } ReplicationTestUtil.java000066400000000000000000000040601244507361200366140ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/replication/** * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License.
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.replication; import java.util.List; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.ZooKeeper; /** Utility class for replication tests */ public class ReplicationTestUtil { /** Checks whether ledger is in under-replication */ static boolean isLedgerInUnderReplication(ZooKeeper zkc, long id, String basePath) throws KeeperException, InterruptedException { List children; try { children = zkc.getChildren(basePath, true); } catch (KeeperException.NoNodeException nne) { return false; } boolean isMatched = false; for (String child : children) { if (child.startsWith("urL") && child.contains(String.valueOf(id))) { isMatched = true; break; } else { String path = basePath + '/' + child; try { if (zkc.getChildren(path, false).size() > 0) { isMatched = isLedgerInUnderReplication(zkc, id, path); } } catch (KeeperException.NoNodeException nne) { return false; } } } return isMatched; } } TestAutoRecoveryAlongWithBookieServers.java000066400000000000000000000075041244507361200424620ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/replication/** * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
* */ package org.apache.bookkeeper.replication; import java.net.InetAddress; import java.net.InetSocketAddress; import java.util.ArrayList; import java.util.Enumeration; import java.util.Set; import java.util.Map.Entry; import org.apache.bookkeeper.client.BookKeeper; import org.apache.bookkeeper.client.LedgerEntry; import org.apache.bookkeeper.client.LedgerHandle; import org.apache.bookkeeper.client.LedgerHandleAdapter; import org.apache.bookkeeper.test.BookKeeperClusterTestCase; import org.apache.bookkeeper.util.BookKeeperConstants; import org.junit.Test; public class TestAutoRecoveryAlongWithBookieServers extends BookKeeperClusterTestCase { private String basePath = ""; public TestAutoRecoveryAlongWithBookieServers() { super(3); baseConf.setAutoRecoveryDaemonEnabled(true); basePath = baseClientConf.getZkLedgersRootPath() + '/' + BookKeeperConstants.UNDER_REPLICATION_NODE + BookKeeperConstants.DEFAULT_ZK_LEDGERS_ROOT_PATH; } /** Tests that the auto recovery service along with Bookie servers itself */ @Test(timeout = 60000) public void testAutoRecoveryAlongWithBookieServers() throws Exception { LedgerHandle lh = bkc.createLedger(3, 3, BookKeeper.DigestType.CRC32, "testpasswd".getBytes()); byte[] testData = "testBuiltAutoRecovery".getBytes(); for (int i = 0; i < 10; i++) { lh.addEntry(testData); } lh.close(); InetSocketAddress replicaToKill = LedgerHandleAdapter .getLedgerMetadata(lh).getEnsembles().get(0L).get(0); killBookie(replicaToKill); int startNewBookie = startNewBookie(); InetSocketAddress newBkAddr = new InetSocketAddress(InetAddress .getLocalHost().getHostAddress(), startNewBookie); while (ReplicationTestUtil.isLedgerInUnderReplication(zkc, lh.getId(), basePath)) { Thread.sleep(100); } // Killing all bookies except newly replicated bookie Set>> entrySet = LedgerHandleAdapter .getLedgerMetadata(lh).getEnsembles().entrySet(); for (Entry> entry : entrySet) { ArrayList bookies = entry.getValue(); for (InetSocketAddress bookie : bookies) { if (bookie.equals(newBkAddr)) { continue; } killBookie(bookie); } } // Should be able to read the entries from 0-9 LedgerHandle lhs = bkc.openLedgerNoRecovery(lh.getId(), BookKeeper.DigestType.CRC32, "testpasswd".getBytes()); Enumeration entries = lhs.readEntries(0, 9); assertTrue("Should have the elements", entries.hasMoreElements()); while (entries.hasMoreElements()) { LedgerEntry entry = entries.nextElement(); assertEquals("testBuiltAutoRecovery", new String(entry.getEntry())); } } } TestLedgerUnderreplicationManager.java000066400000000000000000000705261244507361200414440ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/replication/** * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
* */ package org.apache.bookkeeper.replication; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertFalse; import static org.junit.Assert.assertTrue; import static org.junit.Assert.fail; import java.nio.charset.Charset; import java.util.ArrayList; import java.util.Collection; import java.util.HashSet; import java.util.Iterator; import java.util.List; import java.util.Set; import java.util.concurrent.Callable; import java.util.concurrent.CountDownLatch; import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; import java.util.concurrent.Future; import java.util.concurrent.TimeUnit; import java.util.concurrent.TimeoutException; import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicInteger; import org.apache.bookkeeper.conf.ServerConfiguration; import org.apache.bookkeeper.meta.LedgerManagerFactory; import org.apache.bookkeeper.meta.LedgerUnderreplicationManager; import org.apache.bookkeeper.meta.ZkLedgerUnderreplicationManager; import org.apache.bookkeeper.proto.DataFormats.UnderreplicatedLedgerFormat; import org.apache.bookkeeper.replication.ReplicationException.CompatibilityException; import org.apache.bookkeeper.replication.ReplicationException.UnavailableException; import org.apache.bookkeeper.test.ZooKeeperUtil; import org.apache.bookkeeper.util.BookKeeperConstants; import org.apache.bookkeeper.util.ZkUtils; import org.apache.bookkeeper.zookeeper.ZooKeeperWatcherBase; import org.apache.commons.lang.StringUtils; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.WatchedEvent; import org.apache.zookeeper.Watcher; import org.apache.zookeeper.ZooKeeper; import org.apache.zookeeper.Watcher.Event.EventType; import org.junit.After; import org.junit.Before; import org.junit.Test; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import com.google.protobuf.TextFormat; /** * Test the zookeeper implementation of the ledger replication manager */ public class TestLedgerUnderreplicationManager { static final Logger LOG = LoggerFactory.getLogger(TestLedgerUnderreplicationManager.class); ZooKeeperUtil zkUtil = null; ServerConfiguration conf = null; ExecutorService executor = null; LedgerManagerFactory lmf1 = null; LedgerManagerFactory lmf2 = null; ZooKeeper zkc1 = null; ZooKeeper zkc2 = null; String basePath; String urLedgerPath; boolean isLedgerReplicationDisabled = true; @Before public void setupZooKeeper() throws Exception { zkUtil = new ZooKeeperUtil(); zkUtil.startServer(); conf = new ServerConfiguration() .setAllowLoopback(true) .setZkServers(zkUtil.getZooKeeperConnectString()); executor = Executors.newCachedThreadPool(); ZooKeeperWatcherBase w = new ZooKeeperWatcherBase(10000); zkc1 = ZkUtils.createConnectedZookeeperClient( zkUtil.getZooKeeperConnectString(), w); w = new ZooKeeperWatcherBase(10000); zkc2 = ZkUtils.createConnectedZookeeperClient( zkUtil.getZooKeeperConnectString(), w); lmf1 = LedgerManagerFactory.newLedgerManagerFactory(conf, zkc1); lmf2 = LedgerManagerFactory.newLedgerManagerFactory(conf, zkc2); basePath = conf.getZkLedgersRootPath() + '/' + BookKeeperConstants.UNDER_REPLICATION_NODE; urLedgerPath = basePath + BookKeeperConstants.DEFAULT_ZK_LEDGERS_ROOT_PATH; } @After public void teardownZooKeeper() throws Exception { if (zkUtil != null) { zkUtil.killServer(); zkUtil = null; } if (executor != null) { executor = null; } if (zkc1 != null) { zkc1.close(); zkc1 = null; } if (zkc2 != null) { zkc2.close(); zkc2 = null; } if (lmf1 != null) { lmf1.uninitialize(); 
lmf1 = null; } if (lmf2 != null) { lmf2.uninitialize(); lmf2 = null; } } private Future<Long> getLedgerToReplicate(final LedgerUnderreplicationManager m) { return executor.submit(new Callable<Long>() { public Long call() { try { return m.getLedgerToRereplicate(); } catch (Exception e) { LOG.error("Error getting ledger id", e); return -1L; } } }); } /** * Test basic interactions with the ledger underreplication * manager. * Mark some ledgers as underreplicated. * Ensure that getLedgerToReplicate will block until a ledger * becomes available. */ @Test(timeout=60000) public void testBasicInteraction() throws Exception { Set<Long> ledgers = new HashSet<Long>(); ledgers.add(0xdeadbeefL); ledgers.add(0xbeefcafeL); ledgers.add(0xffffbeefL); ledgers.add(0xfacebeefL); String missingReplica = "localhost:3181"; int count = 0; LedgerUnderreplicationManager m = lmf1.newLedgerUnderreplicationManager(); Iterator<Long> iter = ledgers.iterator(); while (iter.hasNext()) { m.markLedgerUnderreplicated(iter.next(), missingReplica); count++; } List<Future<Long>> futures = new ArrayList<Future<Long>>(); for (int i = 0; i < count; i++) { futures.add(getLedgerToReplicate(m)); } for (Future<Long> f : futures) { Long l = f.get(5, TimeUnit.SECONDS); assertTrue(ledgers.remove(l)); } Future<Long> f = getLedgerToReplicate(m); try { f.get(5, TimeUnit.SECONDS); fail("Shouldn't be able to find a ledger to replicate"); } catch (TimeoutException te) { // correct behaviour } Long newl = 0xfefefefefefeL; m.markLedgerUnderreplicated(newl, missingReplica); assertEquals("Should have got the one just added", newl, f.get(5, TimeUnit.SECONDS)); } /** * Test locking for the ledger underreplication manager. * If there's only one ledger marked for rereplication, * and one client has it, it should be locked; another * client shouldn't be able to get it. If the first client dies * however, the second client should be able to get it. */ @Test(timeout=60000) public void testLocking() throws Exception { String missingReplica = "localhost:3181"; LedgerUnderreplicationManager m1 = lmf1.newLedgerUnderreplicationManager(); LedgerUnderreplicationManager m2 = lmf2.newLedgerUnderreplicationManager(); Long ledger = 0xfeadeefdacL; m1.markLedgerUnderreplicated(ledger, missingReplica); Future<Long> f = getLedgerToReplicate(m1); Long l = f.get(5, TimeUnit.SECONDS); assertEquals("Should be the ledger I just marked", ledger, l); f = getLedgerToReplicate(m2); try { f.get(5, TimeUnit.SECONDS); fail("Shouldn't be able to find a ledger to replicate"); } catch (TimeoutException te) { // correct behaviour } zkc1.close(); // should kill the lock zkc1 = null; l = f.get(5, TimeUnit.SECONDS); assertEquals("Should be the ledger I marked", ledger, l); } /** * Test that when a ledger has been marked as replicated, it * will not be offered to another client. * This test checks that by marking two ledgers, and acquiring * them on a single client. It marks one as replicated and then * the client is killed. We then check that another client can * acquire a ledger, and that it's not the one that was previously * marked as replicated.
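* <p>For illustration only (a hedged sketch, not part of the original suite;
* {@code lmf} stands for one of the LedgerManagerFactory instances above),
* the client-side flow being exercised is roughly:
* <pre>
* LedgerUnderreplicationManager m = lmf.newLedgerUnderreplicationManager();
* long id = m.getLedgerToRereplicate(); // blocks until a ledger is available, then locks it
* // ... re-replicate the ledger's fragments ...
* m.markLedgerReplicated(id); // removes it from the under-replicated list
* </pre>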
*/ @Test(timeout=60000) public void testMarkingAsReplicated() throws Exception { String missingReplica = "localhost:3181"; LedgerUnderreplicationManager m1 = lmf1.newLedgerUnderreplicationManager(); LedgerUnderreplicationManager m2 = lmf2.newLedgerUnderreplicationManager(); Long ledgerA = 0xfeadeefdacL; Long ledgerB = 0xdefadebL; m1.markLedgerUnderreplicated(ledgerA, missingReplica); m1.markLedgerUnderreplicated(ledgerB, missingReplica); Future fA = getLedgerToReplicate(m1); Future fB = getLedgerToReplicate(m1); Long lA = fA.get(5, TimeUnit.SECONDS); Long lB = fB.get(5, TimeUnit.SECONDS); assertTrue("Should be the ledgers I just marked", (lA.equals(ledgerA) && lB.equals(ledgerB)) || (lA.equals(ledgerB) && lB.equals(ledgerA))); Future f = getLedgerToReplicate(m2); try { f.get(5, TimeUnit.SECONDS); fail("Shouldn't be able to find a ledger to replicate"); } catch (TimeoutException te) { // correct behaviour } m1.markLedgerReplicated(lA); zkc1.close(); // should kill the lock zkc1 = null; Long l = f.get(5, TimeUnit.SECONDS); assertEquals("Should be the ledger I marked", lB, l); } /** * Test releasing of a ledger * A ledger is released when a client decides it does not want * to replicate it (or cannot at the moment). * When a client releases a previously acquired ledger, another * client should then be able to acquire it. */ @Test(timeout=60000) public void testRelease() throws Exception { String missingReplica = "localhost:3181"; LedgerUnderreplicationManager m1 = lmf1.newLedgerUnderreplicationManager(); LedgerUnderreplicationManager m2 = lmf2.newLedgerUnderreplicationManager(); Long ledgerA = 0xfeadeefdacL; Long ledgerB = 0xdefadebL; m1.markLedgerUnderreplicated(ledgerA, missingReplica); m1.markLedgerUnderreplicated(ledgerB, missingReplica); Future fA = getLedgerToReplicate(m1); Future fB = getLedgerToReplicate(m1); Long lA = fA.get(5, TimeUnit.SECONDS); Long lB = fB.get(5, TimeUnit.SECONDS); assertTrue("Should be the ledgers I just marked", (lA.equals(ledgerA) && lB.equals(ledgerB)) || (lA.equals(ledgerB) && lB.equals(ledgerA))); Future f = getLedgerToReplicate(m2); try { f.get(5, TimeUnit.SECONDS); fail("Shouldn't be able to find a ledger to replicate"); } catch (TimeoutException te) { // correct behaviour } m1.markLedgerReplicated(lA); m1.releaseUnderreplicatedLedger(lB); Long l = f.get(5, TimeUnit.SECONDS); assertEquals("Should be the ledger I marked", lB, l); } /** * Test that when a failure occurs on a ledger, while the ledger * is already being rereplicated, the ledger will still be in the * under replicated ledger list when first rereplicating client marks * it as replicated. */ @Test(timeout=60000) public void testManyFailures() throws Exception { String missingReplica1 = "localhost:3181"; String missingReplica2 = "localhost:3182"; LedgerUnderreplicationManager m1 = lmf1.newLedgerUnderreplicationManager(); Long ledgerA = 0xfeadeefdacL; m1.markLedgerUnderreplicated(ledgerA, missingReplica1); Future fA = getLedgerToReplicate(m1); Long lA = fA.get(5, TimeUnit.SECONDS); m1.markLedgerUnderreplicated(ledgerA, missingReplica2); assertEquals("Should be the ledger I just marked", lA, ledgerA); m1.markLedgerReplicated(lA); Future f = getLedgerToReplicate(m1); lA = f.get(5, TimeUnit.SECONDS); assertEquals("Should be the ledger I had marked previously", lA, ledgerA); } /** * Test that when a ledger is marked as underreplicated with * the same missing replica twice, only marking as replicated * will be enough to remove it from the list. 
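* <p>A sketch of the expected idempotency (illustrative; {@code id} is any ledger id):
* <pre>
* m1.markLedgerUnderreplicated(id, "localhost:3181");
* m2.markLedgerUnderreplicated(id, "localhost:3181"); // no duplicate replica entry is published
* m1.markLedgerReplicated(id); // a single call removes the urLedger znode
* </pre>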
*/ @Test(timeout=60000) public void test2reportSame() throws Exception { String missingReplica1 = "localhost:3181"; LedgerUnderreplicationManager m1 = lmf1.newLedgerUnderreplicationManager(); LedgerUnderreplicationManager m2 = lmf2.newLedgerUnderreplicationManager(); Long ledgerA = 0xfeadeefdacL; m1.markLedgerUnderreplicated(ledgerA, missingReplica1); m2.markLedgerUnderreplicated(ledgerA, missingReplica1); // verify duplicate missing replica UnderreplicatedLedgerFormat.Builder builderA = UnderreplicatedLedgerFormat .newBuilder(); String znode = getUrLedgerZnode(ledgerA); byte[] data = zkc1.getData(znode, false, null); TextFormat.merge(new String(data, Charset.forName("UTF-8")), builderA); List replicaList = builderA.getReplicaList(); assertEquals("Published duplicate missing replica : " + replicaList, 1, replicaList.size()); assertTrue("Published duplicate missing replica : " + replicaList, replicaList.contains(missingReplica1)); Future fA = getLedgerToReplicate(m1); Long lA = fA.get(5, TimeUnit.SECONDS); assertEquals("Should be the ledger I just marked", lA, ledgerA); m1.markLedgerReplicated(lA); Future f = getLedgerToReplicate(m2); try { f.get(5, TimeUnit.SECONDS); fail("Shouldn't be able to find a ledger to replicate"); } catch (TimeoutException te) { // correct behaviour } } /** * Test that multiple LedgerUnderreplicationManagers should be able to take * lock and release for same ledger */ @Test(timeout = 30000) public void testMultipleManagersShouldBeAbleToTakeAndReleaseLock() throws Exception { String missingReplica1 = "localhost:3181"; final LedgerUnderreplicationManager m1 = lmf1 .newLedgerUnderreplicationManager(); final LedgerUnderreplicationManager m2 = lmf2 .newLedgerUnderreplicationManager(); Long ledgerA = 0xfeadeefdacL; m1.markLedgerUnderreplicated(ledgerA, missingReplica1); final int iterationCount = 100; final CountDownLatch latch1 = new CountDownLatch(iterationCount); final CountDownLatch latch2 = new CountDownLatch(iterationCount); Thread thread1 = new Thread() { @Override public void run() { takeLedgerAndRelease(m1, latch1, iterationCount); } }; Thread thread2 = new Thread() { @Override public void run() { takeLedgerAndRelease(m2, latch2, iterationCount); } }; thread1.start(); thread2.start(); // wait until at least one thread completed while (!latch1.await(50, TimeUnit.MILLISECONDS) && !latch2.await(50, TimeUnit.MILLISECONDS)) { Thread.sleep(50); } m1.close(); m2.close(); // After completing 'lock acquire,release' job, it should notify below // wait latch1.await(); latch2.await(); } /** * Test verifies failures of bookies which are resembling each other. 
* * BK servers named like********************************************* * 1.cluster.com, 2.cluster.com, 11.cluster.com, 12.cluster.com * ******************************************************************* * * BKserver IP:HOST like********************************************* * localhost:3181, localhost:318, localhost:31812 * ******************************************************************* */ @Test(timeout=60000) public void testMarkSimilarMissingReplica() throws Exception { List<String> missingReplica = new ArrayList<String>(); missingReplica.add("localhost:3181"); missingReplica.add("localhost:318"); missingReplica.add("localhost:31812"); missingReplica.add("1.cluster.com"); missingReplica.add("2.cluster.com"); missingReplica.add("11.cluster.com"); missingReplica.add("12.cluster.com"); verifyMarkLedgerUnderreplicated(missingReplica); } /** * Test multiple bookie failures for a ledger, marked as underreplicated * one after another. */ @Test(timeout=60000) public void testManyFailuresInAnEnsemble() throws Exception { List<String> missingReplica = new ArrayList<String>(); missingReplica.add("localhost:3181"); missingReplica.add("localhost:3182"); verifyMarkLedgerUnderreplicated(missingReplica); } /** * Test disabling the ledger re-replication. After disabling, it will not be * able to getLedgerToRereplicate(). These calls will block * until the re-replication process is enabled again. */ @Test(timeout = 20000) public void testDisableLedegerReplication() throws Exception { final LedgerUnderreplicationManager replicaMgr = lmf1 .newLedgerUnderreplicationManager(); // simulate a few urLedgers before disabling final Long ledgerA = 0xfeadeefdacL; final String missingReplica = "localhost:3181"; // disabling replication replicaMgr.disableLedgerReplication(); LOG.info("Disabled Ledger Replication"); try { replicaMgr.markLedgerUnderreplicated(ledgerA, missingReplica); } catch (UnavailableException e) { LOG.debug("Unexpected exception while marking urLedger", e); fail("Unexpected exception while marking urLedger" + e.getMessage()); } Future<Long> fA = getLedgerToReplicate(replicaMgr); try { fA.get(5, TimeUnit.SECONDS); fail("Shouldn't be able to find a ledger to replicate"); } catch (TimeoutException te) { // expected behaviour, as the replication is disabled isLedgerReplicationDisabled = false; } assertTrue("Ledger replication is not disabled!", !isLedgerReplicationDisabled); } /** * Test enabling the ledger re-replication.
After enabling ledger replication, the * getLedgerToRereplicate() task should continue. */ @Test(timeout = 20000) public void testEnableLedegerReplication() throws Exception { isLedgerReplicationDisabled = true; final LedgerUnderreplicationManager replicaMgr = lmf1 .newLedgerUnderreplicationManager(); // simulate a few urLedgers before disabling final Long ledgerA = 0xfeadeefdacL; final String missingReplica = "localhost:3181"; try { replicaMgr.markLedgerUnderreplicated(ledgerA, missingReplica); } catch (UnavailableException e) { LOG.debug("Unexpected exception while marking urLedger", e); fail("Unexpected exception while marking urLedger" + e.getMessage()); } // disabling replication replicaMgr.disableLedgerReplication(); LOG.debug("Disabled Ledger Replication"); String znodeA = getUrLedgerZnode(ledgerA); final CountDownLatch znodeLatch = new CountDownLatch(2); String urledgerA = StringUtils.substringAfterLast(znodeA, "/"); String urLockLedgerA = basePath + "/locks/" + urledgerA; zkc1.exists(urLockLedgerA, new Watcher(){ @Override public void process(WatchedEvent event) { if (event.getType() == EventType.NodeCreated) { znodeLatch.countDown(); LOG.debug("Received node creation event for the zNodePath:" + event.getPath()); } }}); // getLedgerToRereplicate waits until rereplication is enabled again Thread thread1 = new Thread() { @Override public void run() { try { Long lA = replicaMgr.getLedgerToRereplicate(); assertEquals("Should be the ledger I just marked", lA, ledgerA); isLedgerReplicationDisabled = false; znodeLatch.countDown(); } catch (UnavailableException e) { LOG.debug("Unexpected exception while marking urLedger", e); isLedgerReplicationDisabled = false; } } }; thread1.start(); try { znodeLatch.await(5, TimeUnit.SECONDS); assertTrue("Ledger replication is not disabled!", isLedgerReplicationDisabled); assertEquals("Failed to disable ledger replication!", 2, znodeLatch .getCount()); replicaMgr.enableLedgerReplication(); znodeLatch.await(5, TimeUnit.SECONDS); LOG.debug("Enabled Ledger Replication"); assertTrue("Ledger replication is not disabled!", !isLedgerReplicationDisabled); assertEquals("Failed to enable ledger replication!", 0, znodeLatch .getCount()); } finally { thread1.interrupt(); } } /** * Test that the hierarchy gets cleaned up as ledgers * are marked as fully replicated */ @Test(timeout=60000) public void testHierarchyCleanup() throws Exception { final LedgerUnderreplicationManager replicaMgr = lmf1 .newLedgerUnderreplicationManager(); // 4 ledgers, 2 in the same hierarchy long[] ledgers = { 0x00000000deadbeefL, 0x00000000deadbeeeL, 0x00000000beefcafeL, 0x00000000cafed00dL }; for (long l : ledgers) { replicaMgr.markLedgerUnderreplicated(l, "localhost:3181"); } // can't simply test top level as we are limited to ledger // ids no larger than an int String testPath = urLedgerPath + "/0000/0000"; List<String> children = zkc1.getChildren(testPath, false); assertEquals("Wrong number of hierarchies", 3, children.size()); int marked = 0; while (marked < 3) { long l = replicaMgr.getLedgerToRereplicate(); if (l != ledgers[0]) { replicaMgr.markLedgerReplicated(l); marked++; } else { replicaMgr.releaseUnderreplicatedLedger(l); } } children = zkc1.getChildren(testPath, false); assertEquals("Wrong number of hierarchies", 1, children.size()); long l = replicaMgr.getLedgerToRereplicate(); assertEquals("Got wrong ledger", ledgers[0], l); replicaMgr.markLedgerReplicated(l); children = zkc1.getChildren(urLedgerPath, false); assertEquals("All hierarchies should be cleaned up", 0, children.size()); } /** * Test
that as the hierarchy gets cleaned up, it doesn't interfere * with the marking of other ledgers as underreplicated */ @Test(timeout = 90000) public void testHierarchyCleanupInterference() throws Exception { final LedgerUnderreplicationManager replicaMgr1 = lmf1 .newLedgerUnderreplicationManager(); final LedgerUnderreplicationManager replicaMgr2 = lmf2 .newLedgerUnderreplicationManager(); final int iterations = 1000; final AtomicBoolean threadFailed = new AtomicBoolean(false); Thread markUnder = new Thread() { public void run() { long l = 1; try { for (int i = 0; i < iterations; i++) { replicaMgr1.markLedgerUnderreplicated(l, "localhost:3181"); l += 10000; } } catch (Exception e) { LOG.error("markUnder Thread failed with exception", e); threadFailed.set(true); return; } } }; final AtomicInteger processed = new AtomicInteger(0); Thread markRepl = new Thread() { public void run() { try { for (int i = 0; i < iterations; i++) { long l = replicaMgr2.getLedgerToRereplicate(); replicaMgr2.markLedgerReplicated(l); processed.incrementAndGet(); } } catch (Exception e) { LOG.error("markRepl Thread failed with exception", e); threadFailed.set(true); return; } } }; markRepl.setDaemon(true); markUnder.setDaemon(true); markRepl.start(); markUnder.start(); markUnder.join(); assertFalse("Thread failed to complete", threadFailed.get()); int lastProcessed = 0; while (true) { markRepl.join(10000); if (!markRepl.isAlive()) { break; } assertFalse("markRepl thread not progressing", lastProcessed == processed.get()); } assertFalse("Thread failed to complete", threadFailed.get()); List children = zkc1.getChildren(urLedgerPath, false); for (String s : children) { LOG.info("s: {}", s); } assertEquals("All hierarchies should be cleaned up", 0, children.size()); } private void verifyMarkLedgerUnderreplicated(Collection missingReplica) throws KeeperException, InterruptedException, CompatibilityException, UnavailableException { Long ledgerA = 0xfeadeefdacL; String znodeA = getUrLedgerZnode(ledgerA); LedgerUnderreplicationManager replicaMgr = lmf1 .newLedgerUnderreplicationManager(); for (String replica : missingReplica) { replicaMgr.markLedgerUnderreplicated(ledgerA, replica); } String urLedgerA = getData(znodeA); UnderreplicatedLedgerFormat.Builder builderA = UnderreplicatedLedgerFormat .newBuilder(); for (String replica : missingReplica) { builderA.addReplica(replica); } List replicaList = builderA.getReplicaList(); for (String replica : missingReplica) { assertTrue("UrLedger:" + urLedgerA + " doesn't contain failed bookie :" + replica, replicaList .contains(replica)); } } private String getData(String znode) { try { byte[] data = zkc1.getData(znode, false, null); return new String(data); } catch (KeeperException e) { LOG.error("Exception while reading data from znode :" + znode); } catch (InterruptedException e) { LOG.error("Exception while reading data from znode :" + znode); } return ""; } private String getUrLedgerZnode(long ledgerId) { return ZkLedgerUnderreplicationManager.getUrLedgerZnode(urLedgerPath, ledgerId); } private void takeLedgerAndRelease(final LedgerUnderreplicationManager m, final CountDownLatch latch, int numberOfIterations) { for (int i = 0; i < numberOfIterations; i++) { try { long ledgerToRereplicate = m.getLedgerToRereplicate(); m.releaseUnderreplicatedLedger(ledgerToRereplicate); } catch (UnavailableException e) { LOG.error("UnavailableException when " + "taking or releasing lock", e); } latch.countDown(); } } } 
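/*
 * A minimal sketch (assuming a LedgerUnderreplicationManager `replicaMgr` as in the
 * tests above) of the disable/enable semantics exercised by
 * testDisableLedegerReplication and testEnableLedegerReplication:
 *
 *     replicaMgr.disableLedgerReplication();
 *     // getLedgerToRereplicate() now blocks; markLedgerUnderreplicated() still succeeds
 *     replicaMgr.enableLedgerReplication();
 *     // blocked getLedgerToRereplicate() calls resume and hand out ledgers again
 */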
TestReplicationWorker.java000066400000000000000000000542771244507361200371550ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/replication/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.replication; import java.net.InetAddress; import java.net.InetSocketAddress; import java.util.ArrayList; import java.util.Enumeration; import java.util.Set; import java.util.Map.Entry; import org.apache.bookkeeper.client.BKException; import org.apache.bookkeeper.client.BookKeeper; import org.apache.bookkeeper.client.ClientUtil; import org.apache.bookkeeper.client.LedgerEntry; import org.apache.bookkeeper.client.LedgerHandle; import org.apache.bookkeeper.client.LedgerHandleAdapter; import org.apache.bookkeeper.conf.ServerConfiguration; import org.apache.bookkeeper.meta.LedgerManagerFactory; import org.apache.bookkeeper.meta.LedgerUnderreplicationManager; import org.apache.bookkeeper.proto.BookieServer; import org.apache.bookkeeper.test.MultiLedgerManagerTestCase; import org.apache.bookkeeper.util.BookKeeperConstants; import org.apache.bookkeeper.util.ZkUtils; import org.apache.bookkeeper.zookeeper.ZooKeeperWatcherBase; import org.apache.zookeeper.ZooKeeper; import org.junit.Test; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Test the ReplicationWorker, where it has to replicate the fragments from * failed Bookies to the given target Bookie.
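* <p>A condensed lifecycle sketch (names as used in the tests below):
* <pre>
* ReplicationWorker rw = new ReplicationWorker(zkc, baseConf, newBkAddr);
* rw.start(); // starts competing for under-replicated ledgers
* // ... fragments are re-replicated to newBkAddr ...
* rw.shutdown(); // always called in a finally block in the tests
* </pre>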
*/ public class TestReplicationWorker extends MultiLedgerManagerTestCase { private static final byte[] TESTPASSWD = "testpasswd".getBytes(); private static final Logger LOG = LoggerFactory .getLogger(TestReplicationWorker.class); private String basePath = ""; private LedgerManagerFactory mFactory; private LedgerUnderreplicationManager underReplicationManager; private static byte[] data = "TestReplicationWorker".getBytes(); public TestReplicationWorker(String ledgerManagerFactory) { super(3); LOG.info("Running test case using ledger manager : " + ledgerManagerFactory); // set ledger manager name baseConf.setLedgerManagerFactoryClassName(ledgerManagerFactory); baseClientConf.setLedgerManagerFactoryClassName(ledgerManagerFactory); basePath = baseClientConf.getZkLedgersRootPath() + '/' + BookKeeperConstants.UNDER_REPLICATION_NODE + BookKeeperConstants.DEFAULT_ZK_LEDGERS_ROOT_PATH; baseConf.setRereplicationEntryBatchSize(3); } @Override public void setUp() throws Exception { super.setUp(); // initialize urReplicationManager mFactory = LedgerManagerFactory.newLedgerManagerFactory(baseClientConf, zkc); underReplicationManager = mFactory.newLedgerUnderreplicationManager(); } @Override public void tearDown() throws Exception { super.tearDown(); if(null != mFactory){ mFactory.uninitialize(); mFactory = null; } if(null != underReplicationManager){ underReplicationManager.close(); underReplicationManager = null; } } /** * Tests that replication worker should replicate the failed bookie * fragments to target bookie given to the worker. */ @Test(timeout = 30000) public void testRWShouldReplicateFragmentsToTargetBookie() throws Exception { LedgerHandle lh = bkc.createLedger(3, 3, BookKeeper.DigestType.CRC32, TESTPASSWD); for (int i = 0; i < 10; i++) { lh.addEntry(data); } InetSocketAddress replicaToKill = LedgerHandleAdapter .getLedgerMetadata(lh).getEnsembles().get(0L).get(0); LOG.info("Killing Bookie", replicaToKill); killBookie(replicaToKill); int startNewBookie = startNewBookie(); for (int i = 0; i < 10; i++) { lh.addEntry(data); } InetSocketAddress newBkAddr = new InetSocketAddress(InetAddress .getLocalHost().getHostAddress(), startNewBookie); LOG.info("New Bookie addr :" + newBkAddr); ReplicationWorker rw = new ReplicationWorker(zkc, baseConf, newBkAddr); rw.start(); try { underReplicationManager.markLedgerUnderreplicated(lh.getId(), replicaToKill.toString()); while (ReplicationTestUtil.isLedgerInUnderReplication(zkc, lh .getId(), basePath)) { Thread.sleep(100); } killAllBookies(lh, newBkAddr); // Should be able to read the entries from 0-9 verifyRecoveredLedgers(lh, 0, 9); } finally { rw.shutdown(); } } /** * Tests that replication worker should retry for replication until enough * bookies available for replication */ @Test(timeout = 60000) public void testRWShouldRetryUntilThereAreEnoughBksAvailableForReplication() throws Exception { LedgerHandle lh = bkc.createLedger(1, 1, BookKeeper.DigestType.CRC32, TESTPASSWD); for (int i = 0; i < 10; i++) { lh.addEntry(data); } lh.close(); InetSocketAddress replicaToKill = LedgerHandleAdapter .getLedgerMetadata(lh).getEnsembles().get(0L).get(0); LOG.info("Killing Bookie", replicaToKill); ServerConfiguration killedBookieConfig = killBookie(replicaToKill); int startNewBookie = startNewBookie(); InetSocketAddress newBkAddr = new InetSocketAddress(InetAddress .getLocalHost().getHostAddress(), startNewBookie); LOG.info("New Bookie addr :" + newBkAddr); killAllBookies(lh, newBkAddr); ReplicationWorker rw = new ReplicationWorker(zkc, baseConf, newBkAddr); 
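// All bookies holding this ledger's fragments were killed above, so the worker
// is expected to keep retrying rather than give up until a source bookie returns.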
rw.start(); try { underReplicationManager.markLedgerUnderreplicated(lh.getId(), replicaToKill.toString()); int counter = 100; while (counter-- > 0) { assertTrue("Expecting that replication should not complete", ReplicationTestUtil.isLedgerInUnderReplication(zkc, lh .getId(), basePath)); Thread.sleep(100); } // restart killed bookie bs.add(startBookie(killedBookieConfig)); bsConfs.add(killedBookieConfig); while (ReplicationTestUtil.isLedgerInUnderReplication(zkc, lh .getId(), basePath)) { Thread.sleep(100); } // Should be able to read the entries from 0-9 verifyRecoveredLedgers(lh, 0, 9); } finally { rw.shutdown(); } } /** * Tests that replication worker 1 should take one fragment for replication and * the other replication worker should also compete for the replication. */ @Test(timeout = 90000) public void test2RWsShouldCompeteForReplicationOf2FragmentsAndCompleteReplication() throws Exception { LedgerHandle lh = bkc.createLedger(2, 2, BookKeeper.DigestType.CRC32, TESTPASSWD); for (int i = 0; i < 10; i++) { lh.addEntry(data); } lh.close(); InetSocketAddress replicaToKill = LedgerHandleAdapter .getLedgerMetadata(lh).getEnsembles().get(0L).get(0); LOG.info("Killing Bookie : {}", replicaToKill); ServerConfiguration killedBookieConfig = killBookie(replicaToKill); killAllBookies(lh, null); // Start RW1 int startNewBookie1 = startNewBookie(); InetSocketAddress newBkAddr1 = new InetSocketAddress(InetAddress .getLocalHost().getHostAddress(), startNewBookie1); LOG.info("New Bookie addr :" + newBkAddr1); ReplicationWorker rw1 = new ReplicationWorker(zkc, baseConf, newBkAddr1); // Start RW2 int startNewBookie2 = startNewBookie(); InetSocketAddress newBkAddr2 = new InetSocketAddress(InetAddress .getLocalHost().getHostAddress(), startNewBookie2); LOG.info("New Bookie addr :" + newBkAddr2); ZooKeeperWatcherBase w = new ZooKeeperWatcherBase(10000); ZooKeeper zkc1 = ZkUtils.createConnectedZookeeperClient( zkUtil.getZooKeeperConnectString(), w); ReplicationWorker rw2 = new ReplicationWorker(zkc1, baseConf, newBkAddr2); rw1.start(); rw2.start(); try { underReplicationManager.markLedgerUnderreplicated(lh.getId(), replicaToKill.toString()); int counter = 10; while (counter-- > 0) { assertTrue("Expecting that replication should not complete", ReplicationTestUtil.isLedgerInUnderReplication(zkc, lh .getId(), basePath)); Thread.sleep(100); } // restart killed bookie bs.add(startBookie(killedBookieConfig)); bsConfs.add(killedBookieConfig); while (ReplicationTestUtil.isLedgerInUnderReplication(zkc, lh .getId(), basePath)) { Thread.sleep(100); } // Should be able to read the entries from 0-9 verifyRecoveredLedgers(lh, 0, 9); } finally { rw1.shutdown(); rw2.shutdown(); zkc1.close(); } } /** * Tests that the replication worker should clean the ledger's under-replication * node if the ledger has already been deleted. */ @Test(timeout = 3000) public void testRWShouldCleanTheLedgerFromUnderReplicationIfLedgerAlreadyDeleted() throws Exception { LedgerHandle lh = bkc.createLedger(2, 2, BookKeeper.DigestType.CRC32, TESTPASSWD); for (int i = 0; i < 10; i++) { lh.addEntry(data); } lh.close(); InetSocketAddress replicaToKill = LedgerHandleAdapter .getLedgerMetadata(lh).getEnsembles().get(0L).get(0); LOG.info("Killing Bookie : {}", replicaToKill); killBookie(replicaToKill); int startNewBookie = startNewBookie(); InetSocketAddress newBkAddr = new InetSocketAddress(InetAddress .getLocalHost().getHostAddress(), startNewBookie); LOG.info("New Bookie addr :" + newBkAddr); ReplicationWorker rw = new ReplicationWorker(zkc, baseConf, newBkAddr); rw.start(); try {
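// The ledger is deleted before being marked under-replicated below; the worker
// should notice the missing metadata and simply clean up the urLedger znode.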
bkc.deleteLedger(lh.getId()); // Deleting the ledger // Also mark ledger as in UnderReplication underReplicationManager.markLedgerUnderreplicated(lh.getId(), replicaToKill.toString()); while (ReplicationTestUtil.isLedgerInUnderReplication(zkc, lh .getId(), basePath)) { Thread.sleep(100); } } finally { rw.shutdown(); } } @Test(timeout = 60000) public void testMultipleLedgerReplicationWithReplicationWorker() throws Exception { // Ledger1 LedgerHandle lh1 = bkc.createLedger(3, 3, BookKeeper.DigestType.CRC32, TESTPASSWD); for (int i = 0; i < 10; i++) { lh1.addEntry(data); } InetSocketAddress replicaToKillFromFirstLedger = LedgerHandleAdapter .getLedgerMetadata(lh1).getEnsembles().get(0L).get(0); LOG.info("Killing Bookie : {}", replicaToKillFromFirstLedger); // Ledger2 LedgerHandle lh2 = bkc.createLedger(3, 3, BookKeeper.DigestType.CRC32, TESTPASSWD); for (int i = 0; i < 10; i++) { lh2.addEntry(data); } InetSocketAddress replicaToKillFromSecondLedger = LedgerHandleAdapter .getLedgerMetadata(lh2).getEnsembles().get(0L).get(0); LOG.info("Killing Bookie : {}", replicaToKillFromSecondLedger); // Kill a bookie from ledger1's ensemble killBookie(replicaToKillFromFirstLedger); lh1.close(); // Kill a bookie from ledger2's ensemble killBookie(replicaToKillFromSecondLedger); lh2.close(); int startNewBookie = startNewBookie(); InetSocketAddress newBkAddr = new InetSocketAddress(InetAddress .getLocalHost().getHostAddress(), startNewBookie); LOG.info("New Bookie addr :" + newBkAddr); ReplicationWorker rw = new ReplicationWorker(zkc, baseConf, newBkAddr); rw.start(); try { // Mark ledger1 and 2 as underreplicated underReplicationManager.markLedgerUnderreplicated(lh1.getId(), replicaToKillFromFirstLedger.toString()); underReplicationManager.markLedgerUnderreplicated(lh2.getId(), replicaToKillFromSecondLedger.toString()); while (ReplicationTestUtil.isLedgerInUnderReplication(zkc, lh1 .getId(), basePath)) { Thread.sleep(100); } while (ReplicationTestUtil.isLedgerInUnderReplication(zkc, lh2 .getId(), basePath)) { Thread.sleep(100); } killAllBookies(lh1, newBkAddr); // Should be able to read the entries from 0-9 verifyRecoveredLedgers(lh1, 0, 9); verifyRecoveredLedgers(lh2, 0, 9); } finally { rw.shutdown(); } } /** * Tests that ReplicationWorker should fence the ledger and release ledger * lock after timeout. Then replication should happen normally.
*/ @Test(timeout = 60000) public void testRWShouldReplicateTheLedgersAfterTimeoutIfLastFragmentIsUR() throws Exception { LedgerHandle lh = bkc.createLedger(3, 3, BookKeeper.DigestType.CRC32, TESTPASSWD); for (int i = 0; i < 10; i++) { lh.addEntry(data); } InetSocketAddress replicaToKill = LedgerHandleAdapter .getLedgerMetadata(lh).getEnsembles().get(0L).get(0); LOG.info("Killing Bookie", replicaToKill); killBookie(replicaToKill); int startNewBookie = startNewBookie(); InetSocketAddress newBkAddr = new InetSocketAddress(InetAddress .getLocalHost().getHostAddress(), startNewBookie); LOG.info("New Bookie addr :" + newBkAddr); ReplicationWorker rw = new ReplicationWorker(zkc, baseConf, newBkAddr); LedgerManagerFactory mFactory = LedgerManagerFactory .newLedgerManagerFactory(baseClientConf, zkc); LedgerUnderreplicationManager underReplicationManager = mFactory .newLedgerUnderreplicationManager(); rw.start(); try { underReplicationManager.markLedgerUnderreplicated(lh.getId(), replicaToKill.toString()); while (ReplicationTestUtil.isLedgerInUnderReplication(zkc, lh .getId(), basePath)) { Thread.sleep(100); } killAllBookies(lh, newBkAddr); // Should be able to read the entries from 0-9 verifyRecoveredLedgers(lh, 0, 9); lh = bkc.openLedgerNoRecovery(lh.getId(), BookKeeper.DigestType.CRC32, TESTPASSWD); assertFalse("Ledger must have been closed by RW", ClientUtil .isLedgerOpen(lh)); } finally { rw.shutdown(); underReplicationManager.close(); } } /** * Tests that ReplicationWorker should not have identified for postponing * the replication if ledger is in open state and lastFragment is not in * underReplication state. Note that RW should not fence such ledgers. */ @Test(timeout = 30000) public void testRWShouldReplicateTheLedgersAfterTimeoutIfLastFragmentIsNotUR() throws Exception { LedgerHandle lh = bkc.createLedger(3, 3, BookKeeper.DigestType.CRC32, TESTPASSWD); for (int i = 0; i < 10; i++) { lh.addEntry(data); } InetSocketAddress replicaToKill = LedgerHandleAdapter .getLedgerMetadata(lh).getEnsembles().get(0L).get(0); LOG.info("Killing Bookie", replicaToKill); killBookie(replicaToKill); int startNewBookie = startNewBookie(); // Reform ensemble...Making sure that last fragment is not in // under-replication for (int i = 0; i < 10; i++) { lh.addEntry(data); } InetSocketAddress newBkAddr = new InetSocketAddress(InetAddress .getLocalHost().getHostAddress(), startNewBookie); LOG.info("New Bookie addr :" + newBkAddr); ReplicationWorker rw = new ReplicationWorker(zkc, baseConf, newBkAddr); LedgerManagerFactory mFactory = LedgerManagerFactory .newLedgerManagerFactory(baseClientConf, zkc); LedgerUnderreplicationManager underReplicationManager = mFactory .newLedgerUnderreplicationManager(); rw.start(); try { underReplicationManager.markLedgerUnderreplicated(lh.getId(), replicaToKill.toString()); while (ReplicationTestUtil.isLedgerInUnderReplication(zkc, lh .getId(), basePath)) { Thread.sleep(100); } killAllBookies(lh, newBkAddr); // Should be able to read the entries from 0-9 verifyRecoveredLedgers(lh, 0, 9); lh = bkc.openLedgerNoRecovery(lh.getId(), BookKeeper.DigestType.CRC32, TESTPASSWD); // Ledger should be still in open state assertTrue("Ledger must have been closed by RW", ClientUtil .isLedgerOpen(lh)); } finally { rw.shutdown(); underReplicationManager.close(); } } /** * Test that if the local bookie turns out to be readonly, then no point in running RW. So RW should shutdown. 
*/ @Test(timeout = 20000) public void testRWShutdownOnLocalBookieReadonlyTransition() throws Exception { LedgerHandle lh = bkc.createLedger(3, 3, BookKeeper.DigestType.CRC32, TESTPASSWD); for (int i = 0; i < 10; i++) { lh.addEntry(data); } InetSocketAddress replicaToKill = LedgerHandleAdapter.getLedgerMetadata(lh).getEnsembles().get(0L).get(0); LOG.info("Killing Bookie : {}", replicaToKill); killBookie(replicaToKill); int newBkPort = startNewBookie(); for (int i = 0; i < 10; i++) { lh.addEntry(data); } InetSocketAddress newBkAddr = new InetSocketAddress(InetAddress.getLocalHost().getHostAddress(), newBkPort); LOG.info("New Bookie addr :" + newBkAddr); ReplicationWorker rw = new ReplicationWorker(zkc, baseConf, newBkAddr); rw.start(); try { BookieServer newBk = bs.get(bs.size() - 1); bsConfs.get(bsConfs.size() - 1).setReadOnlyModeEnabled(true); newBk.getBookie().transitionToReadOnlyMode(); underReplicationManager.markLedgerUnderreplicated(lh.getId(), replicaToKill.toString()); while (ReplicationTestUtil.isLedgerInUnderReplication(zkc, lh.getId(), basePath) && rw.isRunning()) { Thread.sleep(100); } assertFalse("RW should shutdown if the bookie is readonly", rw.isRunning()); } finally { rw.shutdown(); } } /** * Test that the replication worker will shut down if it loses its ZooKeeper session */ @Test(timeout=30000) public void testRWZKSessionLost() throws Exception { ZooKeeperWatcherBase w = new ZooKeeperWatcherBase(10000); ZooKeeper zk = ZkUtils.createConnectedZookeeperClient( zkUtil.getZooKeeperConnectString(), w); try { ReplicationWorker rw = new ReplicationWorker(zk, baseConf, getBookie(0)); rw.start(); for (int i = 0; i < 10; i++) { if (rw.isRunning()) { break; } Thread.sleep(1000); } assertTrue("Replication worker should be running", rw.isRunning()); stopZKCluster(); for (int i = 0; i < 10; i++) { if (!rw.isRunning()) { break; } Thread.sleep(1000); } assertFalse("Replication worker should have shut down", rw.isRunning()); } finally { zk.close(); } } private void killAllBookies(LedgerHandle lh, InetSocketAddress excludeBK) throws Exception { // Killing all bookies except newly replicated bookie Set<Entry<Long, ArrayList<InetSocketAddress>>> entrySet = LedgerHandleAdapter .getLedgerMetadata(lh).getEnsembles().entrySet(); for (Entry<Long, ArrayList<InetSocketAddress>> entry : entrySet) { ArrayList<InetSocketAddress> bookies = entry.getValue(); for (InetSocketAddress bookie : bookies) { if (bookie.equals(excludeBK)) { continue; } killBookie(bookie); } } } private void verifyRecoveredLedgers(LedgerHandle lh, long startEntryId, long endEntryId) throws BKException, InterruptedException { LedgerHandle lhs = bkc.openLedgerNoRecovery(lh.getId(), BookKeeper.DigestType.CRC32, TESTPASSWD); Enumeration<LedgerEntry> entries = lhs.readEntries(startEntryId, endEntryId); assertTrue("Should have the elements", entries.hasMoreElements()); while (entries.hasMoreElements()) { LedgerEntry entry = entries.nextElement(); assertEquals("TestReplicationWorker", new String(entry.getEntry())); } } } bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/test/000077500000000000000000000000001244507361200305275ustar00rootroot00000000000000AsyncLedgerOpsTest.java000066400000000000000000000200101244507361200350300ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/testpackage org.apache.bookkeeper.test; /* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership.
The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ import java.io.IOException; import java.nio.ByteBuffer; import java.util.ArrayList; import java.util.Enumeration; import java.util.Random; import java.util.Set; import org.apache.bookkeeper.client.AsyncCallback.AddCallback; import org.apache.bookkeeper.client.LedgerEntry; import org.apache.bookkeeper.client.AsyncCallback.CloseCallback; import org.apache.bookkeeper.client.AsyncCallback.CreateCallback; import org.apache.bookkeeper.client.AsyncCallback.OpenCallback; import org.apache.bookkeeper.client.LedgerHandle; import org.apache.bookkeeper.client.AsyncCallback.ReadCallback; import org.apache.bookkeeper.client.BookKeeper.DigestType; import org.apache.bookkeeper.client.BKException; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.junit.Before; import org.junit.Test; /** * This test tests read and write, synchronous and asynchronous, strings and * integers for a BookKeeper client. The test deployment uses a ZooKeeper server * and three BookKeepers. * */ public class AsyncLedgerOpsTest extends MultiLedgerManagerMultiDigestTestCase implements AddCallback, ReadCallback, CreateCallback, CloseCallback, OpenCallback { static Logger LOG = LoggerFactory.getLogger(AsyncLedgerOpsTest.class); DigestType digestType; public AsyncLedgerOpsTest(String ledgerManagerFactory, DigestType digestType) { super(3); this.digestType = digestType; // set ledger manager type baseConf.setLedgerManagerFactoryClassName(ledgerManagerFactory); baseClientConf.setLedgerManagerFactoryClassName(ledgerManagerFactory); } byte[] ledgerPassword = "aaa".getBytes(); LedgerHandle lh, lh2; long ledgerId; Enumeration ls; // test related variables int numEntriesToWrite = 20; int maxInt = 2147483647; Random rng; // Random Number Generator ArrayList entries; // generated entries ArrayList entriesSize; // Synchronization SyncObj sync; Set syncObjs; class SyncObj { int counter; boolean value; public SyncObj() { counter = 0; value = false; } } class ControlObj { LedgerHandle lh; void setLh(LedgerHandle lh) { this.lh = lh; } LedgerHandle getLh() { return lh; } } @Test(timeout=60000) public void testAsyncCreateClose() throws IOException, BKException { try { ControlObj ctx = new ControlObj(); synchronized (ctx) { LOG.info("Going to create ledger asynchronously"); bkc.asyncCreateLedger(3, 2, digestType, ledgerPassword, this, ctx); ctx.wait(); } // bkc.initMessageDigest("SHA1"); LedgerHandle lh = ctx.getLh(); ledgerId = lh.getId(); LOG.info("Ledger ID: " + lh.getId()); for (int i = 0; i < numEntriesToWrite; i++) { ByteBuffer entry = ByteBuffer.allocate(4); entry.putInt(rng.nextInt(maxInt)); entry.position(0); entries.add(entry.array()); entriesSize.add(entry.array().length); lh.asyncAddEntry(entry.array(), this, sync); } // wait for all entries to be acknowledged synchronized (sync) { while (sync.counter < numEntriesToWrite) { LOG.debug("Entries counter = " + sync.counter); sync.wait(); } } LOG.info("*** WRITE COMPLETE ***"); // close ledger synchronized (ctx) { 
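// closeComplete(...) below calls ctx.notify() once the asynchronous close
// finishes, releasing the ctx.wait() that follows.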
lh.asyncClose(this, ctx); ctx.wait(); } // *** WRITING PART COMPLETE // READ PART BEGINS *** // open ledger synchronized (ctx) { bkc.asyncOpenLedger(ledgerId, digestType, ledgerPassword, this, ctx); ctx.wait(); } lh = ctx.getLh(); LOG.debug("Number of entries written: " + lh.getLastAddConfirmed()); assertTrue("Verifying number of entries written", lh.getLastAddConfirmed() == (numEntriesToWrite - 1)); // read entries lh.asyncReadEntries(0, numEntriesToWrite - 1, this, sync); synchronized (sync) { while (sync.value == false) { sync.wait(); } } LOG.debug("*** READ COMPLETE ***"); // at this point, Enumeration ls is filled with the returned // values int i = 0; while (ls.hasMoreElements()) { ByteBuffer origbb = ByteBuffer.wrap(entries.get(i)); Integer origEntry = origbb.getInt(); byte[] entry = ls.nextElement().getEntry(); ByteBuffer result = ByteBuffer.wrap(entry); LOG.debug("Length of result: " + result.capacity()); LOG.debug("Original entry: " + origEntry); Integer retrEntry = result.getInt(); LOG.debug("Retrieved entry: " + retrEntry); assertTrue("Checking entry " + i + " for equality", origEntry.equals(retrEntry)); assertTrue("Checking entry " + i + " for size", entry.length == entriesSize.get(i).intValue()); i++; } assertTrue("Checking number of read entries", i == numEntriesToWrite); lh.close(); } catch (InterruptedException e) { LOG.error("Interrupted", e); fail("InterruptedException"); } // catch (NoSuchAlgorithmException e) { // e.printStackTrace(); // } } @Override public void addComplete(int rc, LedgerHandle lh, long entryId, Object ctx) { SyncObj x = (SyncObj) ctx; synchronized (x) { x.counter++; x.notify(); } } @Override public void readComplete(int rc, LedgerHandle lh, Enumeration seq, Object ctx) { ls = seq; synchronized (sync) { sync.value = true; sync.notify(); } } @Override public void createComplete(int rc, LedgerHandle lh, Object ctx) { synchronized (ctx) { ControlObj cobj = (ControlObj) ctx; cobj.setLh(lh); cobj.notify(); } } @Override public void openComplete(int rc, LedgerHandle lh, Object ctx) { synchronized (ctx) { ControlObj cobj = (ControlObj) ctx; cobj.setLh(lh); cobj.notify(); } } @Override public void closeComplete(int rc, LedgerHandle lh, Object ctx) { synchronized (ctx) { ControlObj cobj = (ControlObj) ctx; cobj.notify(); } } @Before @Override public void setUp() throws Exception { super.setUp(); rng = new Random(System.currentTimeMillis()); // Initialize the Random // Number Generator entries = new ArrayList(); // initialize the entries list entriesSize = new ArrayList(); sync = new SyncObj(); // initialize the synchronization data structure } } BaseTestCase.java000066400000000000000000000030361244507361200336230ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/test/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. 
See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.test; import java.util.Arrays; import java.util.Collection; import org.apache.bookkeeper.client.BookKeeper.DigestType; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.junit.runner.RunWith; import org.junit.runners.Parameterized; import org.junit.runners.Parameterized.Parameters; @RunWith(Parameterized.class) public abstract class BaseTestCase extends BookKeeperClusterTestCase { static final Logger LOG = LoggerFactory.getLogger(BaseTestCase.class); public BaseTestCase(int numBookies) { super(numBookies); } @Parameters public static Collection configs() { return Arrays.asList(new Object[][] { {DigestType.MAC }, {DigestType.CRC32}}); } } BookKeeperClusterTestCase.java000066400000000000000000000444151244507361200363470ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/test/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.test; import java.io.File; import java.io.IOException; import java.net.InetAddress; import java.net.InetSocketAddress; import java.util.HashMap; import java.util.LinkedList; import java.util.List; import java.util.Map; import java.util.Map.Entry; import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; import junit.framework.TestCase; import org.apache.bookkeeper.bookie.Bookie; import org.apache.bookkeeper.bookie.BookieException; import org.apache.bookkeeper.client.BookKeeperTestClient; import org.apache.bookkeeper.conf.AbstractConfiguration; import org.apache.bookkeeper.conf.ClientConfiguration; import org.apache.bookkeeper.conf.ServerConfiguration; import org.apache.bookkeeper.metastore.InMemoryMetaStore; import org.apache.bookkeeper.proto.BookieServer; import org.apache.bookkeeper.replication.AutoRecoveryMain; import org.apache.bookkeeper.replication.Auditor; import org.apache.bookkeeper.replication.ReplicationException.CompatibilityException; import org.apache.bookkeeper.replication.ReplicationException.UnavailableException; import org.apache.commons.io.FileUtils; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.ZooKeeper; import org.junit.After; import org.junit.Before; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * A class runs several bookie servers for testing. 
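* <p>A minimal subclass sketch (illustrative; MyBkTest and its body are hypothetical):
* <pre>
* public class MyBkTest extends BookKeeperClusterTestCase {
*     public MyBkTest() { super(3); } // run three bookies plus a ZooKeeper server
*     &#64;Test(timeout = 60000)
*     public void testWrite() throws Exception {
*         LedgerHandle lh = bkc.createLedger(DigestType.CRC32, "pwd".getBytes());
*         lh.addEntry("hello".getBytes());
*         lh.close();
*     }
* }
* </pre>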
*/ public abstract class BookKeeperClusterTestCase extends TestCase { static final Logger LOG = LoggerFactory.getLogger(BookKeeperClusterTestCase.class); // ZooKeeper related variables protected ZooKeeperUtil zkUtil = new ZooKeeperUtil(); protected ZooKeeper zkc; // BookKeeper related variables protected List<File> tmpDirs = new LinkedList<File>(); protected List<BookieServer> bs = new LinkedList<BookieServer>(); protected List<ServerConfiguration> bsConfs = new LinkedList<ServerConfiguration>(); protected int numBookies; protected BookKeeperTestClient bkc; protected ServerConfiguration baseConf = new ServerConfiguration(); protected ClientConfiguration baseClientConf = new ClientConfiguration(); private Map<BookieServer, AutoRecoveryMain> autoRecoveryProcesses = new HashMap<BookieServer, AutoRecoveryMain>(); private boolean isAutoRecoveryEnabled; public BookKeeperClusterTestCase(int numBookies) { this.numBookies = numBookies; } @Before @Override public void setUp() throws Exception { LOG.info("Setting up test {}", getName()); InMemoryMetaStore.reset(); setMetastoreImplClass(baseConf); setMetastoreImplClass(baseClientConf); try { // start zookeeper service startZKCluster(); // start bookkeeper service startBKCluster(); } catch (Exception e) { LOG.error("Error setting up", e); throw e; } } @After @Override public void tearDown() throws Exception { LOG.info("Tearing down test {}", getName()); // stop bookkeeper service stopBKCluster(); // stop zookeeper service stopZKCluster(); } /** * Start zookeeper cluster * * @throws Exception */ protected void startZKCluster() throws Exception { zkUtil.startServer(); zkc = zkUtil.getZooKeeperClient(); } /** * Stop zookeeper cluster * * @throws Exception */ protected void stopZKCluster() throws Exception { zkUtil.killServer(); } /** * Start cluster. Also, starts the auto recovery process for each bookie, if * isAutoRecoveryEnabled is true. * * @throws Exception */ protected void startBKCluster() throws Exception { baseClientConf.setZkServers(zkUtil.getZooKeeperConnectString()); if (numBookies > 0) { bkc = new BookKeeperTestClient(baseClientConf); } // Create Bookie Servers (B1, B2, B3) for (int i = 0; i < numBookies; i++) { startNewBookie(); } } /** * Stop cluster. Also, stops all the auto recovery processes for the bookie * cluster, if isAutoRecoveryEnabled is true. 
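* Temporary journal and ledger directories created for the bookies are deleted as well.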
* * @throws Exception */ protected void stopBKCluster() throws Exception { if (bkc != null) { bkc.close(); } for (BookieServer server : bs) { server.shutdown(); AutoRecoveryMain autoRecovery = autoRecoveryProcesses.get(server); if (autoRecovery != null && isAutoRecoveryEnabled()) { autoRecovery.shutdown(); LOG.debug("Shutdown auto recovery for bookieserver:" + server.getLocalAddress()); } } bs.clear(); for (File f : tmpDirs) { FileUtils.deleteDirectory(f); } } protected ServerConfiguration newServerConfiguration() throws Exception { File f = File.createTempFile("bookie", "test"); tmpDirs.add(f); f.delete(); f.mkdir(); int port = PortManager.nextFreePort(); return newServerConfiguration(port, zkUtil.getZooKeeperConnectString(), f, new File[] { f }); } protected ServerConfiguration newServerConfiguration(int port, String zkServers, File journalDir, File[] ledgerDirs) { ServerConfiguration conf = new ServerConfiguration(baseConf); conf.setBookiePort(port); conf.setZkServers(zkServers); conf.setJournalDirName(journalDir.getPath()); conf.setAllowLoopback(true); String[] ledgerDirNames = new String[ledgerDirs.length]; for (int i = 0; i < ledgerDirs.length; i++) { ledgerDirNames[i] = ledgerDirs[i].getPath(); } conf.setLedgerDirNames(ledgerDirNames); return conf; } /** * Kill the bookie at the given index. Also, stops the respective auto * recovery process for this bookie, if isAutoRecoveryEnabled is true. * * @param index * Bookie Index * @return the configuration of the killed bookie * @throws IOException */ public ServerConfiguration killBookie(int index) throws Exception { if (index >= bs.size()) { throw new IOException("Bookie does not exist"); } BookieServer server = bs.get(index); server.shutdown(); stopAutoRecoveryService(server); bs.remove(server); return bsConfs.remove(index); } /** * Sleep a bookie * * @param addr * Socket Address * @param seconds * Sleep seconds * @return a CountDownLatch which is counted down when the sleep finishes * @throws InterruptedException * @throws IOException */ public CountDownLatch sleepBookie(InetSocketAddress addr, final int seconds) throws Exception { for (final BookieServer bookie : bs) { if (bookie.getLocalAddress().equals(addr)) { final CountDownLatch l = new CountDownLatch(1); Thread sleeper = new Thread() { @Override public void run() { try { bookie.suspendProcessing(); l.countDown(); Thread.sleep(seconds*1000); bookie.resumeProcessing(); } catch (Exception e) { LOG.error("Error suspending bookie", e); } } }; sleeper.start(); return l; } } throw new IOException("Bookie not found"); } /** * Sleep a bookie until the given latch is counted down * * @param addr * Socket Address * @param l * Latch to wait on * @throws InterruptedException * @throws IOException */ public void sleepBookie(InetSocketAddress addr, final CountDownLatch l) throws Exception { for (final BookieServer bookie : bs) { if (bookie.getLocalAddress().equals(addr)) { Thread sleeper = new Thread() { public void run() { try { bookie.suspendProcessing(); l.await(); bookie.resumeProcessing(); } catch (Exception e) { LOG.error("Error suspending bookie", e); } } }; sleeper.start(); return; } } throw new IOException("Bookie not found"); } /** * Restart bookie servers. Also restarts all the respective auto recovery * processes, if isAutoRecoveryEnabled is true. * * @throws InterruptedException * @throws IOException * @throws KeeperException * @throws BookieException */ public void restartBookies() throws Exception { restartBookies(null); } /** * Restart bookie servers using new configuration settings. Also restarts the * respective auto recovery processes, if isAutoRecoveryEnabled is true. 
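* Settings present in newConf are overlaid on each bookie's existing configuration via loadConf, so only the overridden keys change.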
* * @param newConf * New Configuration Settings * @throws InterruptedException * @throws IOException * @throws KeeperException * @throws BookieException */ public void restartBookies(ServerConfiguration newConf) throws Exception { // shut down bookie server for (BookieServer server : bs) { server.shutdown(); stopAutoRecoveryService(server); } bs.clear(); Thread.sleep(1000); // give the OS a moment to release the ports, then restart the bookies for (ServerConfiguration conf : bsConfs) { if (null != newConf) { conf.loadConf(newConf); } bs.add(startBookie(conf)); } } /** * Helper method to start up a new bookie server. Also, starts the auto * recovery process, if isAutoRecoveryEnabled is true. * * @return the port that the new bookie server listens on * @throws IOException */ public int startNewBookie() throws Exception { ServerConfiguration conf = newServerConfiguration(); bsConfs.add(conf); bs.add(startBookie(conf)); return conf.getBookiePort(); } /** * Helper method to start up a bookie server using a configuration object. * Also, starts the auto recovery process if isAutoRecoveryEnabled is true. * * @param conf * Server Configuration Object * */ protected BookieServer startBookie(ServerConfiguration conf) throws Exception { BookieServer server = new BookieServer(conf); server.start(); int port = conf.getBookiePort(); while (bkc.getZkHandle().exists("/ledgers/available/" + InetAddress.getLocalHost().getHostAddress() + ":" + port, false) == null) { Thread.sleep(500); } bkc.readBookiesBlocking(); LOG.info("New bookie on port " + port + " has been created."); try { startAutoRecovery(server, conf); } catch (CompatibilityException ce) { LOG.error("Exception while starting AutoRecovery!", ce); } catch (UnavailableException ue) { LOG.error("Exception while starting AutoRecovery!", ue); } return server; } /** * Start a bookie with the given bookie instance. Also, starts the auto * recovery for this bookie, if isAutoRecoveryEnabled is true. */ protected BookieServer startBookie(ServerConfiguration conf, final Bookie b) throws Exception { BookieServer server = new BookieServer(conf) { @Override protected Bookie newBookie(ServerConfiguration conf) { return b; } }; server.start(); int port = conf.getBookiePort(); while (bkc.getZkHandle().exists("/ledgers/available/" + InetAddress.getLocalHost().getHostAddress() + ":" + port, false) == null) { Thread.sleep(500); } bkc.readBookiesBlocking(); LOG.info("New bookie on port " + port + " has been created."); try { startAutoRecovery(server, conf); } catch (CompatibilityException ce) { LOG.error("Exception while starting AutoRecovery!", ce); } catch (UnavailableException ue) { LOG.error("Exception while starting AutoRecovery!", ue); } return server; } public void setMetastoreImplClass(AbstractConfiguration conf) { conf.setMetastoreImplClass(InMemoryMetaStore.class.getName()); } /** * Enables or disables the auto recovery process. If enabled, starting a * bookie server also starts the auto recovery process for that bookie, and * stopping a bookie stops the respective auto recovery process. * * @param isAutoRecoveryEnabled * true to enable the auto recovery process, false to disable it */ public void setAutoRecoveryEnabled(boolean isAutoRecoveryEnabled) { this.isAutoRecoveryEnabled = isAutoRecoveryEnabled; } /** * Checks whether the auto recovery process is enabled. The flag defaults to * false. * * @return true if auto recovery is enabled, 
false otherwise. */ public boolean isAutoRecoveryEnabled() { return isAutoRecoveryEnabled; } private void startAutoRecovery(BookieServer bserver, ServerConfiguration conf) throws Exception { if (isAutoRecoveryEnabled()) { AutoRecoveryMain autoRecoveryProcess = new AutoRecoveryMain(conf); autoRecoveryProcess.start(); autoRecoveryProcesses.put(bserver, autoRecoveryProcess); LOG.debug("Started auto recovery for bookieserver:" + bserver.getLocalAddress()); } } private void stopAutoRecoveryService(BookieServer toRemove) throws Exception { AutoRecoveryMain autoRecoveryMain = autoRecoveryProcesses.remove(toRemove); if (null != autoRecoveryMain && isAutoRecoveryEnabled()) { autoRecoveryMain.shutdown(); LOG.debug("Shutdown auto recovery for bookieserver:" + toRemove.getLocalAddress()); } } /** * Starts the auto recovery process for the bookie servers, one process per * bookie server, if isAutoRecoveryEnabled is true. */ public void startReplicationService() throws Exception { int index = -1; for (BookieServer bserver : bs) { startAutoRecovery(bserver, bsConfs.get(++index)); } } /** * Stops all the auto recovery processes for the bookie cluster, if * isAutoRecoveryEnabled is true. */ public void stopReplicationService() throws Exception { if (!isAutoRecoveryEnabled()) { return; } for (Entry<BookieServer, AutoRecoveryMain> autoRecoveryProcess : autoRecoveryProcesses.entrySet()) { autoRecoveryProcess.getValue().shutdown(); LOG.debug("Shutdown auto recovery for bookieserver:" + autoRecoveryProcess.getKey().getLocalAddress()); } } public Auditor getAuditor(int timeout, TimeUnit unit) throws Exception { final long timeoutAt = System.nanoTime() + TimeUnit.NANOSECONDS.convert(timeout, unit); while (System.nanoTime() < timeoutAt) { for (AutoRecoveryMain p : autoRecoveryProcesses.values()) { Auditor a = p.getAuditor(); if (a != null) { return a; } } Thread.sleep(100); } throw new Exception("No auditor found"); } } BookieClientTest.java000066400000000000000000000213161244507361200345250ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/testpackage org.apache.bookkeeper.test; /* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
* */ import java.io.File; import java.net.InetSocketAddress; import java.nio.ByteBuffer; import java.util.Arrays; import java.util.concurrent.Executors; import org.jboss.netty.buffer.ChannelBuffer; import org.jboss.netty.buffer.ChannelBuffers; import org.jboss.netty.channel.socket.ClientSocketChannelFactory; import org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory; import org.junit.Test; import org.apache.bookkeeper.conf.ClientConfiguration; import org.apache.bookkeeper.conf.ServerConfiguration; import org.apache.bookkeeper.client.BKException; import org.apache.bookkeeper.proto.BookieClient; import org.apache.bookkeeper.proto.BookieProtocol; import org.apache.bookkeeper.proto.BookieServer; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.ReadEntryCallback; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.WriteCallback; import org.apache.bookkeeper.util.OrderedSafeExecutor; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import junit.framework.TestCase; public class BookieClientTest extends TestCase { static Logger LOG = LoggerFactory.getLogger(BookieClientTest.class); BookieServer bs; File tmpDir; public int port = 13645; public ClientSocketChannelFactory channelFactory; public OrderedSafeExecutor executor; ServerConfiguration conf; @Override public void setUp() throws Exception { tmpDir = File.createTempFile("bookie", "test"); tmpDir.delete(); tmpDir.mkdir(); // This test does not need the BookKeeper client to discover bookies via // ZooKeeper, so pass in null for the zkServers parameter when // constructing the BookieServer. conf = new ServerConfiguration(); conf.setZkServers(null).setBookiePort(port) .setJournalDirName(tmpDir.getPath()) .setAllowLoopback(true) .setLedgerDirNames(new String[] { tmpDir.getPath() }); bs = new BookieServer(conf); bs.start(); channelFactory = new NioClientSocketChannelFactory(Executors.newCachedThreadPool(), Executors.newCachedThreadPool()); executor = new OrderedSafeExecutor(2); } @Override public void tearDown() throws Exception { bs.shutdown(); recursiveDelete(tmpDir); channelFactory.releaseExternalResources(); executor.shutdown(); } private static void recursiveDelete(File dir) { File[] children = dir.listFiles(); if (children != null) { for (File child : children) { recursiveDelete(child); } } dir.delete(); } static class ResultStruct { int rc; ByteBuffer entry; } ReadEntryCallback recb = new ReadEntryCallback() { public void readEntryComplete(int rc, long ledgerId, long entryId, ChannelBuffer bb, Object ctx) { ResultStruct rs = (ResultStruct) ctx; synchronized (rs) { rs.rc = rc; if (bb != null) { bb.readerIndex(16); rs.entry = bb.toByteBuffer(); rs.notifyAll(); } } } }; WriteCallback wrcb = new WriteCallback() { public void writeComplete(int rc, long ledgerId, long entryId, InetSocketAddress addr, Object ctx) { if (ctx != null) { synchronized (ctx) { ctx.notifyAll(); } } } }; @Test(timeout=60000) public void testWriteGaps() throws Exception { final Object notifyObject = new Object(); byte[] passwd = new byte[20]; Arrays.fill(passwd, (byte) 'a'); InetSocketAddress addr = new InetSocketAddress("127.0.0.1", port); ResultStruct arc = new ResultStruct(); BookieClient bc = new BookieClient(new ClientConfiguration(), channelFactory, executor); ChannelBuffer bb; bb = createByteBuffer(1, 1, 1); bc.addEntry(addr, 1, passwd, 1, bb, wrcb, arc, BookieProtocol.FLAG_NONE); synchronized (arc) { arc.wait(1000); bc.readEntry(addr, 1, 1, recb, arc); 
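// Note: recb only calls notifyAll() when an entry buffer comes back (bb != null), so error responses are picked up by the bounded arc.wait(1000) calls in this test rather than by an explicit notification.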
arc.wait(1000); assertEquals(0, arc.rc); assertEquals(1, arc.entry.getInt()); } bb = createByteBuffer(2, 1, 2); bc.addEntry(addr, 1, passwd, 2, bb, wrcb, null, BookieProtocol.FLAG_NONE); bb = createByteBuffer(3, 1, 3); bc.addEntry(addr, 1, passwd, 3, bb, wrcb, null, BookieProtocol.FLAG_NONE); bb = createByteBuffer(5, 1, 5); bc.addEntry(addr, 1, passwd, 5, bb, wrcb, null, BookieProtocol.FLAG_NONE); bb = createByteBuffer(7, 1, 7); bc.addEntry(addr, 1, passwd, 7, bb, wrcb, null, BookieProtocol.FLAG_NONE); synchronized (notifyObject) { bb = createByteBuffer(11, 1, 11); bc.addEntry(addr, 1, passwd, 11, bb, wrcb, notifyObject, BookieProtocol.FLAG_NONE); notifyObject.wait(); } synchronized (arc) { bc.readEntry(addr, 1, 6, recb, arc); arc.wait(1000); assertEquals(BKException.Code.NoSuchEntryException, arc.rc); } synchronized (arc) { bc.readEntry(addr, 1, 7, recb, arc); arc.wait(1000); assertEquals(0, arc.rc); assertEquals(7, arc.entry.getInt()); } synchronized (arc) { bc.readEntry(addr, 1, 1, recb, arc); arc.wait(1000); assertEquals(0, arc.rc); assertEquals(1, arc.entry.getInt()); } synchronized (arc) { bc.readEntry(addr, 1, 2, recb, arc); arc.wait(1000); assertEquals(0, arc.rc); assertEquals(2, arc.entry.getInt()); } synchronized (arc) { bc.readEntry(addr, 1, 3, recb, arc); arc.wait(1000); assertEquals(0, arc.rc); assertEquals(3, arc.entry.getInt()); } synchronized (arc) { bc.readEntry(addr, 1, 4, recb, arc); arc.wait(1000); assertEquals(BKException.Code.NoSuchEntryException, arc.rc); } synchronized (arc) { bc.readEntry(addr, 1, 11, recb, arc); arc.wait(1000); assertEquals(0, arc.rc); assertEquals(11, arc.entry.getInt()); } synchronized (arc) { bc.readEntry(addr, 1, 5, recb, arc); arc.wait(1000); assertEquals(0, arc.rc); assertEquals(5, arc.entry.getInt()); } synchronized (arc) { bc.readEntry(addr, 1, 10, recb, arc); arc.wait(1000); assertEquals(BKException.Code.NoSuchEntryException, arc.rc); } synchronized (arc) { bc.readEntry(addr, 1, 12, recb, arc); arc.wait(1000); assertEquals(BKException.Code.NoSuchEntryException, arc.rc); } synchronized (arc) { bc.readEntry(addr, 1, 13, recb, arc); arc.wait(1000); assertEquals(BKException.Code.NoSuchEntryException, arc.rc); } } private ChannelBuffer createByteBuffer(int i, long lid, long eid) { ByteBuffer bb; bb = ByteBuffer.allocate(4 + 16); bb.putLong(lid); bb.putLong(eid); bb.putInt(i); bb.flip(); return ChannelBuffers.wrappedBuffer(bb); } @Test(timeout=60000) public void testNoLedger() throws Exception { ResultStruct arc = new ResultStruct(); InetSocketAddress addr = new InetSocketAddress("127.0.0.1", port); BookieClient bc = new BookieClient(new ClientConfiguration(), channelFactory, executor); synchronized (arc) { bc.readEntry(addr, 2, 13, recb, arc); arc.wait(1000); assertEquals(BKException.Code.NoSuchEntryException, arc.rc); } } } BookieFailureTest.java000066400000000000000000000325711244507361200347030ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/testpackage org.apache.bookkeeper.test; /* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. 
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ import java.io.IOException; import java.nio.ByteBuffer; import java.util.ArrayList; import java.util.Enumeration; import java.util.Random; import java.util.Set; import org.apache.bookkeeper.client.AsyncCallback.AddCallback; import org.apache.bookkeeper.client.BKException; import org.apache.bookkeeper.client.BookKeeperTestClient; import org.apache.bookkeeper.client.LedgerEntry; import org.apache.bookkeeper.client.LedgerHandle; import org.apache.bookkeeper.client.AsyncCallback.ReadCallback; import org.apache.bookkeeper.client.BookKeeper.DigestType; import org.apache.bookkeeper.proto.BookieServer; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.apache.zookeeper.KeeperException; import org.junit.Before; import org.junit.Test; /** * This test tests read and write, synchronous and asynchronous, strings and * integers for a BookKeeper client. The test deployment uses a ZooKeeper server * and three BookKeepers. * */ public class BookieFailureTest extends MultiLedgerManagerMultiDigestTestCase implements AddCallback, ReadCallback { // Depending on the taste, select the amount of logging // by decommenting one of the two lines below // static Logger LOG = Logger.getRootLogger(); static Logger LOG = LoggerFactory.getLogger(BookieFailureTest.class); byte[] ledgerPassword = "aaa".getBytes(); LedgerHandle lh, lh2; long ledgerId; // test related variables int numEntriesToWrite = 200; int maxInt = 2147483647; Random rng; // Random Number Generator ArrayList entries; // generated entries ArrayList entriesSize; DigestType digestType; class SyncObj { int counter; boolean value; boolean failureOccurred; Enumeration ls; public SyncObj() { counter = 0; value = false; failureOccurred = false; ls = null; } } public BookieFailureTest(String ledgerManagerFactory, DigestType digestType) { super(4); this.digestType = digestType; // set ledger manager baseConf.setLedgerManagerFactoryClassName(ledgerManagerFactory); baseClientConf.setLedgerManagerFactoryClassName(ledgerManagerFactory); } /** * Tests writes and reads when a bookie fails. 
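* Four variants run below, each killing a different bookie of the four-bookie ensemble while 200 asynchronous writes are in flight.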
* * @throws {@link IOException} */ @Test(timeout=60000) public void testAsyncBK1() throws IOException { LOG.info("#### BK1 ####"); auxTestReadWriteAsyncSingleClient(bs.get(0)); } @Test(timeout=60000) public void testAsyncBK2() throws IOException { LOG.info("#### BK2 ####"); auxTestReadWriteAsyncSingleClient(bs.get(1)); } @Test(timeout=60000) public void testAsyncBK3() throws IOException { LOG.info("#### BK3 ####"); auxTestReadWriteAsyncSingleClient(bs.get(2)); } @Test(timeout=60000) public void testAsyncBK4() throws IOException { LOG.info("#### BK4 ####"); auxTestReadWriteAsyncSingleClient(bs.get(3)); } @Test(timeout=60000) public void testBookieRecovery() throws Exception { //Shutdown all but 1 bookie bs.get(0).shutdown(); bs.get(1).shutdown(); bs.get(2).shutdown(); byte[] passwd = "blah".getBytes(); LedgerHandle lh = bkc.createLedger(1, 1,digestType, passwd); int numEntries = 100; for (int i=0; i< numEntries; i++) { byte[] data = (""+i).getBytes(); lh.addEntry(data); } bs.get(3).shutdown(); BookieServer server = new BookieServer(bsConfs.get(3)); server.start(); bs.set(3, server); assertEquals(numEntries - 1 , lh.getLastAddConfirmed()); Enumeration entries = lh.readEntries(0, lh.getLastAddConfirmed()); int numScanned = 0; while (entries.hasMoreElements()) { assertEquals((""+numScanned), new String(entries.nextElement().getEntry())); numScanned++; } assertEquals(numEntries, numScanned); } void auxTestReadWriteAsyncSingleClient(BookieServer bs) throws IOException { SyncObj sync = new SyncObj(); try { // Create a ledger lh = bkc.createLedger(3, 2, digestType, ledgerPassword); ledgerId = lh.getId(); LOG.info("Ledger ID: " + lh.getId()); for (int i = 0; i < numEntriesToWrite; i++) { ByteBuffer entry = ByteBuffer.allocate(4); entry.putInt(rng.nextInt(maxInt)); entry.position(0); entries.add(entry.array()); entriesSize.add(entry.array().length); lh.asyncAddEntry(entry.array(), this, sync); } LOG.info("Wrote " + numEntriesToWrite + " and now going to fail bookie."); // Bookie fail bs.shutdown(); // wait for all entries to be acknowledged synchronized (sync) { while (sync.counter < numEntriesToWrite) { LOG.debug("Entries counter = " + sync.counter); sync.wait(10000); assertFalse("Failure occurred during write", sync.failureOccurred); } } LOG.debug("*** WRITE COMPLETE ***"); // close ledger lh.close(); // *** WRITING PART COMPLETE // READ PART BEGINS *** // open ledger bkc.close(); bkc = new BookKeeperTestClient(baseClientConf); lh = bkc.openLedger(ledgerId, digestType, ledgerPassword); LOG.debug("Number of entries written: " + (lh.getLastAddConfirmed() + 1)); assertTrue("Verifying number of entries written", lh.getLastAddConfirmed() == (numEntriesToWrite - 1)); // read entries lh.asyncReadEntries(0, numEntriesToWrite - 1, this, sync); synchronized (sync) { int i = 0; sync.wait(10000); assertFalse("Failure occurred during read", sync.failureOccurred); assertTrue("Haven't received entries", sync.value); } LOG.debug("*** READ COMPLETE ***"); // at this point, Enumeration ls is filled with the returned // values int i = 0; while (sync.ls.hasMoreElements()) { ByteBuffer origbb = ByteBuffer.wrap(entries.get(i)); Integer origEntry = origbb.getInt(); byte[] entry = sync.ls.nextElement().getEntry(); ByteBuffer result = ByteBuffer.wrap(entry); Integer retrEntry = result.getInt(); LOG.debug("Retrieved entry: " + i); assertTrue("Checking entry " + i + " for equality", origEntry.equals(retrEntry)); assertTrue("Checking entry " + i + " for size", entry.length == entriesSize.get(i).intValue()); i++; } 
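// Entries come back in entry-id order, so the single index i walks both the original list and the returned enumeration.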
assertTrue("Checking number of read entries", i == numEntriesToWrite); LOG.info("Verified that entries are ok, and now closing ledger"); lh.close(); } catch (KeeperException e) { LOG.error("Caught KeeperException", e); fail(e.toString()); } catch (BKException e) { LOG.error("Caught BKException", e); fail(e.toString()); } catch (InterruptedException e) { LOG.error("Caught InterruptedException", e); fail(e.toString()); } } @Override public void addComplete(int rc, LedgerHandle lh, long entryId, Object ctx) { SyncObj x = (SyncObj) ctx; if (rc != 0) { LOG.error("Failure during add {} {}", entryId, rc); x.failureOccurred = true; } synchronized (x) { x.counter++; x.notify(); } } @Override public void readComplete(int rc, LedgerHandle lh, Enumeration seq, Object ctx) { SyncObj x = (SyncObj) ctx; if (rc != 0) { LOG.error("Failure during add {}", rc); x.failureOccurred = true; } synchronized (x) { x.value = true; x.ls = seq; x.notify(); } } @Before @Override public void setUp() throws Exception { super.setUp(); rng = new Random(System.currentTimeMillis()); // Initialize the Random // Number Generator entries = new ArrayList(); // initialize the entries list entriesSize = new ArrayList(); zkc.close(); } @Test(timeout=60000) public void testLedgerNoRecoveryOpenAfterBKCrashed() throws Exception { // Create a ledger LedgerHandle beforelh = bkc.createLedger(numBookies, numBookies, digestType, "".getBytes()); int numEntries = 10; String tmp = "BookKeeper is cool!"; for (int i=0; i seq = lhs[j].readEntries(start, end); assertTrue("Enumeration of ledger entries has no element", seq.hasMoreElements() == true); while (seq.hasMoreElements()) { LedgerEntry e = seq.nextElement(); assertEquals(entryId, e.getEntryId()); StringBuilder sb = new StringBuilder(); sb.append(ledgerIds[j]).append('-').append(entryId).append('-') .append(msg); Assert.assertArrayEquals(sb.toString().getBytes(), e.getEntry()); entryId++; } assertEquals(entryId - 1, end); start = end + 1; read = Math.min(numToRead, numMsgs - start); end = start + read - 1; } } } /** * This test writes enough ledger entries to roll over the journals * * It will then keep only 1 journal file before last marked journal * * @throws Exception */ @Test(timeout=60000) public void testJournalRolling() throws Exception { if (LOG.isDebugEnabled()) { LOG.debug("Testing Journal Rolling"); } // Write enough ledger entries so that we roll over journals LedgerHandle[] lhs = writeLedgerEntries(4, 1024, 1024); long[] ledgerIds = new long[lhs.length]; for (int i=0; i entries; // generated entries ArrayList entriesSize; DigestType digestType; public BookieReadWriteTest(String ledgerManagerFactory, DigestType digestType) { super(3); this.digestType = digestType; // set ledger manager baseConf.setLedgerManagerFactoryClassName(ledgerManagerFactory); baseClientConf.setLedgerManagerFactoryClassName(ledgerManagerFactory); } class SyncObj { long lastConfirmed; volatile int counter; boolean value; AtomicInteger rc = new AtomicInteger(BKException.Code.OK); Enumeration ls = null; public SyncObj() { counter = 0; lastConfirmed = LedgerHandle.INVALID_ENTRY_ID; value = false; } void setReturnCode(int rc) { this.rc.compareAndSet(BKException.Code.OK, rc); } int getReturnCode() { return rc.get(); } void setLedgerEntries(Enumeration ls) { this.ls = ls; } Enumeration getLedgerEntries() { return ls; } } @Test(timeout=60000) public void testOpenException() throws IOException, InterruptedException { try { lh = bkc.openLedger(0, digestType, ledgerPassword); fail("Haven't thrown exception"); } catch 
(BKException e) { LOG.warn("Successfully thrown and caught exception:", e); } } /** * test the streaming api for reading and writing * * @throws {@link IOException} */ @Test(timeout=60000) public void testStreamingClients() throws IOException, BKException, InterruptedException { lh = bkc.createLedger(digestType, ledgerPassword); // write a string so that we cna // create a buffer of a single bytes // and check for corner cases String toWrite = "we need to check for this string to match " + "and for the record mahadev is the best"; LedgerOutputStream lout = new LedgerOutputStream(lh, 1); byte[] b = toWrite.getBytes(); lout.write(b); lout.close(); long lId = lh.getId(); lh.close(); // check for sanity lh = bkc.openLedger(lId, digestType, ledgerPassword); LedgerInputStream lin = new LedgerInputStream(lh, 1); byte[] bread = new byte[b.length]; int read = 0; while (read < b.length) { read = read + lin.read(bread, read, b.length); } String newString = new String(bread); assertTrue("these two should same", toWrite.equals(newString)); lin.close(); lh.close(); // create another ledger to write one byte at a time lh = bkc.createLedger(digestType, ledgerPassword); lout = new LedgerOutputStream(lh); for (int i = 0; i < b.length; i++) { lout.write(b[i]); } lout.close(); lId = lh.getId(); lh.close(); lh = bkc.openLedger(lId, digestType, ledgerPassword); lin = new LedgerInputStream(lh); bread = new byte[b.length]; read = 0; while (read < b.length) { read = read + lin.read(bread, read, b.length); } newString = new String(bread); assertTrue("these two should be same ", toWrite.equals(newString)); lin.close(); lh.close(); } @Test(timeout=60000) public void testReadWriteAsyncSingleClient() throws IOException { SyncObj sync = new SyncObj(); try { // Create a ledger lh = bkc.createLedger(digestType, ledgerPassword); // bkc.initMessageDigest("SHA1"); ledgerId = lh.getId(); LOG.info("Ledger ID: " + lh.getId()); for (int i = 0; i < numEntriesToWrite; i++) { ByteBuffer entry = ByteBuffer.allocate(4); entry.putInt(rng.nextInt(maxInt)); entry.position(0); entries.add(entry.array()); entriesSize.add(entry.array().length); lh.asyncAddEntry(entry.array(), this, sync); } // wait for all entries to be acknowledged synchronized (sync) { while (sync.counter < numEntriesToWrite) { LOG.debug("Entries counter = " + sync.counter); sync.wait(); } assertEquals("Error adding", BKException.Code.OK, sync.getReturnCode()); } LOG.debug("*** WRITE COMPLETE ***"); // close ledger lh.close(); // *** WRITING PART COMPLETE // READ PART BEGINS *** // open ledger lh = bkc.openLedger(ledgerId, digestType, ledgerPassword); LOG.debug("Number of entries written: " + (lh.getLastAddConfirmed() + 1)); assertTrue("Verifying number of entries written", lh.getLastAddConfirmed() == (numEntriesToWrite - 1)); // read entries lh.asyncReadEntries(0, numEntriesToWrite - 1, this, sync); synchronized (sync) { while (sync.value == false) { sync.wait(); } assertEquals("Error reading", BKException.Code.OK, sync.getReturnCode()); } LOG.debug("*** READ COMPLETE ***"); // at this point, Enumeration ls is filled with the returned // values int i = 0; Enumeration ls = sync.getLedgerEntries(); while (ls.hasMoreElements()) { ByteBuffer origbb = ByteBuffer.wrap(entries.get(i)); Integer origEntry = origbb.getInt(); byte[] entry = ls.nextElement().getEntry(); ByteBuffer result = ByteBuffer.wrap(entry); LOG.debug("Length of result: " + result.capacity()); LOG.debug("Original entry: " + origEntry); Integer retrEntry = result.getInt(); LOG.debug("Retrieved entry: " + 
retrEntry); assertTrue("Checking entry " + i + " for equality", origEntry.equals(retrEntry)); assertTrue("Checking entry " + i + " for size", entry.length == entriesSize.get(i).intValue()); i++; } assertTrue("Checking number of read entries", i == numEntriesToWrite); lh.close(); } catch (BKException e) { LOG.error("Test failed", e); fail("Test failed due to BookKeeper exception"); } catch (InterruptedException e) { LOG.error("Test failed", e); fail("Test failed due to interruption"); } } /** * Check that the add api with offset and length work correctly. * First try varying the offset. Then the length with a fixed non-zero * offset. */ @Test(timeout=60000) public void testReadWriteRangeAsyncSingleClient() throws IOException { SyncObj sync = new SyncObj(); try { // Create a ledger lh = bkc.createLedger(digestType, ledgerPassword); // bkc.initMessageDigest("SHA1"); ledgerId = lh.getId(); LOG.info("Ledger ID: " + lh.getId()); byte bytes[] = {'a','b','c','d','e','f','g','h','i'}; lh.asyncAddEntry(bytes, 0, bytes.length, this, sync); lh.asyncAddEntry(bytes, 0, 4, this, sync); // abcd lh.asyncAddEntry(bytes, 3, 4, this, sync); // defg lh.asyncAddEntry(bytes, 3, (bytes.length-3), this, sync); // defghi int numEntries = 4; // wait for all entries to be acknowledged synchronized (sync) { while (sync.counter < numEntries) { LOG.debug("Entries counter = " + sync.counter); sync.wait(); } assertEquals("Error adding", BKException.Code.OK, sync.getReturnCode()); } try { lh.asyncAddEntry(bytes, -1, bytes.length, this, sync); fail("Shouldn't be able to use negative offset"); } catch (ArrayIndexOutOfBoundsException aiob) { // expected } try { lh.asyncAddEntry(bytes, 0, bytes.length+1, this, sync); fail("Shouldn't be able to use that much length"); } catch (ArrayIndexOutOfBoundsException aiob) { // expected } try { lh.asyncAddEntry(bytes, -1, bytes.length+2, this, sync); fail("Shouldn't be able to use negative offset " + "with that much length"); } catch (ArrayIndexOutOfBoundsException aiob) { // expected } try { lh.asyncAddEntry(bytes, 4, -3, this, sync); fail("Shouldn't be able to use negative length"); } catch (ArrayIndexOutOfBoundsException aiob) { // expected } try { lh.asyncAddEntry(bytes, -4, -3, this, sync); fail("Shouldn't be able to use negative offset & length"); } catch (ArrayIndexOutOfBoundsException aiob) { // expected } LOG.debug("*** WRITE COMPLETE ***"); // close ledger lh.close(); // *** WRITING PART COMPLETE // READ PART BEGINS *** // open ledger lh = bkc.openLedger(ledgerId, digestType, ledgerPassword); LOG.debug("Number of entries written: " + (lh.getLastAddConfirmed() + 1)); assertTrue("Verifying number of entries written", lh.getLastAddConfirmed() == (numEntries - 1)); // read entries lh.asyncReadEntries(0, numEntries - 1, this, sync); synchronized (sync) { while (sync.value == false) { sync.wait(); } assertEquals("Error reading", BKException.Code.OK, sync.getReturnCode()); } LOG.debug("*** READ COMPLETE ***"); // at this point, Enumeration ls is filled with the returned // values int i = 0; Enumeration ls = sync.getLedgerEntries(); while (ls.hasMoreElements()) { byte[] expected = null; byte[] entry = ls.nextElement().getEntry(); switch (i) { case 0: expected = Arrays.copyOfRange(bytes, 0, bytes.length); break; case 1: expected = Arrays.copyOfRange(bytes, 0, 4); break; case 2: expected = Arrays.copyOfRange(bytes, 3, 3+4); break; case 3: expected = Arrays.copyOfRange(bytes, 3, 3+(bytes.length-3)); break; } assertNotNull("There are more checks than writes", expected); String message = 
"Checking entry " + i + " for equality [" + new String(entry, "UTF-8") + "," + new String(expected, "UTF-8") + "]"; assertTrue(message, Arrays.equals(entry, expected)); i++; } assertTrue("Checking number of read entries", i == numEntries); lh.close(); } catch (BKException e) { LOG.error("Test failed", e); fail("Test failed due to BookKeeper exception"); } catch (InterruptedException e) { LOG.error("Test failed", e); fail("Test failed due to interruption"); } } class ThrottleTestCallback implements ReadCallback { int throttle; ThrottleTestCallback(int threshold) { this.throttle = threshold; } @Override public void readComplete(int rc, LedgerHandle lh, Enumeration seq, Object ctx) { SyncObj sync = (SyncObj)ctx; sync.setLedgerEntries(seq); sync.setReturnCode(rc); synchronized(sync) { sync.counter += throttle; sync.notify(); } LOG.info("Current counter: " + sync.counter); } } @Test(timeout=60000) public void testSyncReadAsyncWriteStringsSingleClient() throws IOException { SyncObj sync = new SyncObj(); LOG.info("TEST READ WRITE STRINGS MIXED SINGLE CLIENT"); String charset = "utf-8"; LOG.debug("Default charset: " + Charset.defaultCharset()); try { // Create a ledger lh = bkc.createLedger(digestType, ledgerPassword); // bkc.initMessageDigest("SHA1"); ledgerId = lh.getId(); LOG.info("Ledger ID: " + lh.getId()); for (int i = 0; i < numEntriesToWrite; i++) { int randomInt = rng.nextInt(maxInt); byte[] entry = new String(Integer.toString(randomInt)).getBytes(charset); entries.add(entry); lh.asyncAddEntry(entry, this, sync); } // wait for all entries to be acknowledged synchronized (sync) { while (sync.counter < numEntriesToWrite) { LOG.debug("Entries counter = " + sync.counter); sync.wait(); } assertEquals("Error adding", BKException.Code.OK, sync.getReturnCode()); } LOG.debug("*** ASYNC WRITE COMPLETE ***"); // close ledger lh.close(); // *** WRITING PART COMPLETED // READ PART BEGINS *** // open ledger lh = bkc.openLedger(ledgerId, digestType, ledgerPassword); LOG.debug("Number of entries written: " + (lh.getLastAddConfirmed() + 1)); assertTrue("Verifying number of entries written", lh.getLastAddConfirmed() == (numEntriesToWrite - 1)); // read entries Enumeration ls = lh.readEntries(0, numEntriesToWrite - 1); LOG.debug("*** SYNC READ COMPLETE ***"); // at this point, Enumeration ls is filled with the returned // values int i = 0; while (ls.hasMoreElements()) { byte[] origEntryBytes = entries.get(i++); byte[] retrEntryBytes = ls.nextElement().getEntry(); LOG.debug("Original byte entry size: " + origEntryBytes.length); LOG.debug("Saved byte entry size: " + retrEntryBytes.length); String origEntry = new String(origEntryBytes, charset); String retrEntry = new String(retrEntryBytes, charset); LOG.debug("Original entry: " + origEntry); LOG.debug("Retrieved entry: " + retrEntry); assertTrue("Checking entry " + i + " for equality", origEntry.equals(retrEntry)); } assertTrue("Checking number of read entries", i == numEntriesToWrite); lh.close(); } catch (BKException e) { LOG.error("Test failed", e); fail("Test failed due to BookKeeper exception"); } catch (InterruptedException e) { LOG.error("Test failed", e); fail("Test failed due to interruption"); } } @Test(timeout=60000) public void testReadWriteSyncSingleClient() throws IOException { try { // Create a ledger lh = bkc.createLedger(digestType, ledgerPassword); // bkc.initMessageDigest("SHA1"); ledgerId = lh.getId(); LOG.info("Ledger ID: " + lh.getId()); for (int i = 0; i < numEntriesToWrite; i++) { ByteBuffer entry = ByteBuffer.allocate(4); 
entry.putInt(rng.nextInt(maxInt)); entry.position(0); entries.add(entry.array()); lh.addEntry(entry.array()); } lh.close(); lh = bkc.openLedger(ledgerId, digestType, ledgerPassword); LOG.debug("Number of entries written: " + lh.getLastAddConfirmed()); assertTrue("Verifying number of entries written", lh.getLastAddConfirmed() == (numEntriesToWrite - 1)); Enumeration ls = lh.readEntries(0, numEntriesToWrite - 1); int i = 0; while (ls.hasMoreElements()) { ByteBuffer origbb = ByteBuffer.wrap(entries.get(i++)); Integer origEntry = origbb.getInt(); ByteBuffer result = ByteBuffer.wrap(ls.nextElement().getEntry()); LOG.debug("Length of result: " + result.capacity()); LOG.debug("Original entry: " + origEntry); Integer retrEntry = result.getInt(); LOG.debug("Retrieved entry: " + retrEntry); assertTrue("Checking entry " + i + " for equality", origEntry.equals(retrEntry)); } lh.close(); } catch (BKException e) { LOG.error("Test failed", e); fail("Test failed due to BookKeeper exception"); } catch (InterruptedException e) { LOG.error("Test failed", e); fail("Test failed due to interruption"); } } @Test(timeout=60000) public void testReadWriteZero() throws IOException { try { // Create a ledger lh = bkc.createLedger(digestType, ledgerPassword); // bkc.initMessageDigest("SHA1"); ledgerId = lh.getId(); LOG.info("Ledger ID: " + lh.getId()); for (int i = 0; i < numEntriesToWrite; i++) { lh.addEntry(new byte[0]); } /* * Write a non-zero entry */ ByteBuffer entry = ByteBuffer.allocate(4); entry.putInt(rng.nextInt(maxInt)); entry.position(0); entries.add(entry.array()); lh.addEntry(entry.array()); lh.close(); lh = bkc.openLedger(ledgerId, digestType, ledgerPassword); LOG.debug("Number of entries written: " + lh.getLastAddConfirmed()); assertTrue("Verifying number of entries written", lh.getLastAddConfirmed() == numEntriesToWrite); Enumeration ls = lh.readEntries(0, numEntriesToWrite - 1); int i = 0; while (ls.hasMoreElements()) { ByteBuffer result = ByteBuffer.wrap(ls.nextElement().getEntry()); LOG.debug("Length of result: " + result.capacity()); assertTrue("Checking if entry " + i + " has zero bytes", result.capacity() == 0); } lh.close(); } catch (BKException e) { LOG.error("Test failed", e); fail("Test failed due to BookKeeper exception"); } catch (InterruptedException e) { LOG.error("Test failed", e); fail("Test failed due to interruption"); } } @Test(timeout=60000) public void testMultiLedger() throws IOException { try { // Create a ledger lh = bkc.createLedger(digestType, ledgerPassword); lh2 = bkc.createLedger(digestType, ledgerPassword); long ledgerId = lh.getId(); long ledgerId2 = lh2.getId(); // bkc.initMessageDigest("SHA1"); LOG.info("Ledger ID 1: " + lh.getId() + ", Ledger ID 2: " + lh2.getId()); for (int i = 0; i < numEntriesToWrite; i++) { lh.addEntry(new byte[0]); lh2.addEntry(new byte[0]); } lh.close(); lh2.close(); lh = bkc.openLedger(ledgerId, digestType, ledgerPassword); lh2 = bkc.openLedger(ledgerId2, digestType, ledgerPassword); LOG.debug("Number of entries written: " + lh.getLastAddConfirmed() + ", " + lh2.getLastAddConfirmed()); assertTrue("Verifying number of entries written lh (" + lh.getLastAddConfirmed() + ")", lh .getLastAddConfirmed() == (numEntriesToWrite - 1)); assertTrue("Verifying number of entries written lh2 (" + lh2.getLastAddConfirmed() + ")", lh2 .getLastAddConfirmed() == (numEntriesToWrite - 1)); Enumeration ls = lh.readEntries(0, numEntriesToWrite - 1); int i = 0; while (ls.hasMoreElements()) { ByteBuffer result = ByteBuffer.wrap(ls.nextElement().getEntry()); 
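// Both ledgers were written with zero-length entries, so every buffer read back should be empty.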
LOG.debug("Length of result: " + result.capacity()); assertTrue("Checking if entry " + i + " has zero bytes", result.capacity() == 0); } lh.close(); ls = lh2.readEntries(0, numEntriesToWrite - 1); i = 0; while (ls.hasMoreElements()) { ByteBuffer result = ByteBuffer.wrap(ls.nextElement().getEntry()); LOG.debug("Length of result: " + result.capacity()); assertTrue("Checking if entry " + i + " has zero bytes", result.capacity() == 0); } lh2.close(); } catch (BKException e) { LOG.error("Test failed", e); fail("Test failed due to BookKeeper exception"); } catch (InterruptedException e) { LOG.error("Test failed", e); fail("Test failed due to interruption"); } } @Test(timeout=60000) public void testReadWriteAsyncLength() throws IOException { SyncObj sync = new SyncObj(); try { // Create a ledger lh = bkc.createLedger(digestType, ledgerPassword); // bkc.initMessageDigest("SHA1"); ledgerId = lh.getId(); LOG.info("Ledger ID: " + lh.getId()); for (int i = 0; i < numEntriesToWrite; i++) { ByteBuffer entry = ByteBuffer.allocate(4); entry.putInt(rng.nextInt(maxInt)); entry.position(0); entries.add(entry.array()); entriesSize.add(entry.array().length); lh.asyncAddEntry(entry.array(), this, sync); } // wait for all entries to be acknowledged synchronized (sync) { while (sync.counter < numEntriesToWrite) { LOG.debug("Entries counter = " + sync.counter); sync.wait(); } assertEquals("Error adding", BKException.Code.OK, sync.getReturnCode()); } long length = numEntriesToWrite * 4; assertTrue("Ledger length before closing: " + lh.getLength(), lh.getLength() == length); LOG.debug("*** WRITE COMPLETE ***"); // close ledger lh.close(); // *** WRITING PART COMPLETE // READ PART BEGINS *** // open ledger lh = bkc.openLedger(ledgerId, digestType, ledgerPassword); assertTrue("Ledger length after opening: " + lh.getLength(), lh.getLength() == length); lh.close(); } catch (BKException e) { LOG.error("Test failed", e); fail("Test failed due to BookKeeper exception"); } catch (InterruptedException e) { LOG.error("Test failed", e); fail("Test failed due to interruption"); } } @Test(timeout=60000) public void testReadFromOpenLedger() throws IOException { try { // Create a ledger lh = bkc.createLedger(digestType, ledgerPassword); // bkc.initMessageDigest("SHA1"); ledgerId = lh.getId(); LOG.info("Ledger ID: " + lh.getId()); for (int i = 0; i < numEntriesToWrite; i++) { ByteBuffer entry = ByteBuffer.allocate(4); entry.putInt(rng.nextInt(maxInt)); entry.position(0); entries.add(entry.array()); entriesSize.add(entry.array().length); lh.addEntry(entry.array()); if(i == numEntriesToWrite/2) { LedgerHandle lhOpen = bkc.openLedgerNoRecovery(ledgerId, digestType, ledgerPassword); // no recovery opened ledger 's last confirmed entry id is less than written // and it just can read until (i-1) int toRead = i - 1; Enumeration readEntry = lhOpen.readEntries(toRead, toRead); assertTrue("Enumeration of ledger entries has no element", readEntry.hasMoreElements() == true); LedgerEntry e = readEntry.nextElement(); assertEquals(toRead, e.getEntryId()); Assert.assertArrayEquals(entries.get(toRead), e.getEntry()); // should not written to a read only ledger try { lhOpen.addEntry(entry.array()); fail("Should have thrown an exception here"); } catch (BKException.BKIllegalOpException bkioe) { // this is the correct response } catch (Exception ex) { LOG.error("Unexpected exception", ex); fail("Unexpected exception"); } // close read only ledger should not change metadata lhOpen.close(); } } long last = lh.readLastConfirmed(); assertTrue("Last 
confirmed add: " + last, last == (numEntriesToWrite - 2)); LOG.debug("*** WRITE COMPLETE ***"); // close ledger lh.close(); /* * Asynchronous call to read last confirmed entry */ lh = bkc.createLedger(digestType, ledgerPassword); // bkc.initMessageDigest("SHA1"); ledgerId = lh.getId(); LOG.info("Ledger ID: " + lh.getId()); for (int i = 0; i < numEntriesToWrite; i++) { ByteBuffer entry = ByteBuffer.allocate(4); entry.putInt(rng.nextInt(maxInt)); entry.position(0); entries.add(entry.array()); entriesSize.add(entry.array().length); lh.addEntry(entry.array()); } SyncObj sync = new SyncObj(); lh.asyncReadLastConfirmed(this, sync); // Wait for for last confirmed synchronized (sync) { while (sync.lastConfirmed == -1) { LOG.debug("Counter = " + sync.lastConfirmed); sync.wait(); } assertEquals("Error reading", BKException.Code.OK, sync.getReturnCode()); } assertTrue("Last confirmed add: " + sync.lastConfirmed, sync.lastConfirmed == (numEntriesToWrite - 2)); LOG.debug("*** WRITE COMPLETE ***"); // close ledger lh.close(); } catch (BKException e) { LOG.error("Test failed", e); fail("Test failed due to BookKeeper exception"); } catch (InterruptedException e) { LOG.error("Test failed", e); fail("Test failed due to interruption"); } } @Test(timeout=60000) public void testReadFromOpenLedgerOpenOnce() throws Exception { try { // Create a ledger lh = bkc.createLedger(digestType, ledgerPassword); // bkc.initMessageDigest("SHA1"); ledgerId = lh.getId(); LOG.info("Ledger ID: " + lh.getId()); LedgerHandle lhOpen = bkc.openLedgerNoRecovery(ledgerId, digestType, ledgerPassword); for (int i = 0; i < numEntriesToWrite; i++) { ByteBuffer entry = ByteBuffer.allocate(4); entry.putInt(rng.nextInt(maxInt)); entry.position(0); entries.add(entry.array()); entriesSize.add(entry.array().length); lh.addEntry(entry.array()); if (i == numEntriesToWrite / 2) { // no recovery opened ledger 's last confirmed entry id is // less than written // and it just can read until (i-1) int toRead = i - 1; long readLastConfirmed = lhOpen.readLastConfirmed(); assertTrue(readLastConfirmed != 0); Enumeration readEntry = lhOpen.readEntries(toRead, toRead); assertTrue("Enumeration of ledger entries has no element", readEntry.hasMoreElements() == true); LedgerEntry e = readEntry.nextElement(); assertEquals(toRead, e.getEntryId()); Assert.assertArrayEquals(entries.get(toRead), e.getEntry()); // should not written to a read only ledger try { lhOpen.addEntry(entry.array()); fail("Should have thrown an exception here"); } catch (BKException.BKIllegalOpException bkioe) { // this is the correct response } catch (Exception ex) { LOG.error("Unexpected exception", ex); fail("Unexpected exception"); } } } long last = lh.readLastConfirmed(); assertTrue("Last confirmed add: " + last, last == (numEntriesToWrite - 2)); LOG.debug("*** WRITE COMPLETE ***"); // close ledger lh.close(); // close read only ledger should not change metadata lhOpen.close(); } catch (BKException e) { LOG.error("Test failed", e); fail("Test failed due to BookKeeper exception"); } catch (InterruptedException e) { LOG.error("Test failed", e); fail("Test failed due to interruption"); } } @Test(timeout=60000) public void testReadFromOpenLedgerZeroAndOne() throws Exception { try { // Create a ledger lh = bkc.createLedger(digestType, ledgerPassword); // bkc.initMessageDigest("SHA1"); ledgerId = lh.getId(); LOG.info("Ledger ID: " + lh.getId()); LedgerHandle lhOpen = bkc.openLedgerNoRecovery(ledgerId, digestType, ledgerPassword); /* * We haven't written anything, so it should be empty. 
*/ LOG.debug("Checking that it is empty"); long readLastConfirmed = lhOpen.readLastConfirmed(); assertTrue("Last confirmed has the wrong value", readLastConfirmed == LedgerHandle.INVALID_ENTRY_ID); /* * Writing one entry. */ LOG.debug("Going to write one entry"); ByteBuffer entry = ByteBuffer.allocate(4); entry.putInt(rng.nextInt(maxInt)); entry.position(0); entries.add(entry.array()); entriesSize.add(entry.array().length); lh.addEntry(entry.array()); /* * The hint should still indicate that there is no confirmed * add. */ LOG.debug("Checking that it is still empty even after writing one entry"); readLastConfirmed = lhOpen.readLastConfirmed(); assertTrue(readLastConfirmed == LedgerHandle.INVALID_ENTRY_ID); /* * Adding one more, and this time we should expect to * see one entry. */ entry = ByteBuffer.allocate(4); entry.putInt(rng.nextInt(maxInt)); entry.position(0); entries.add(entry.array()); entriesSize.add(entry.array().length); lh.addEntry(entry.array()); LOG.info("Checking that it has an entry"); readLastConfirmed = lhOpen.readLastConfirmed(); assertTrue(readLastConfirmed == 0L); // close ledger lh.close(); // close read only ledger should not change metadata lhOpen.close(); } catch (BKException e) { LOG.error("Test failed", e); fail("Test failed due to BookKeeper exception"); } catch (InterruptedException e) { LOG.error("Test failed", e); fail("Test failed due to interruption"); } } @Test(timeout=60000) public void testLastConfirmedAdd() throws IOException { try { // Create a ledger lh = bkc.createLedger(digestType, ledgerPassword); // bkc.initMessageDigest("SHA1"); ledgerId = lh.getId(); LOG.info("Ledger ID: " + lh.getId()); for (int i = 0; i < numEntriesToWrite; i++) { ByteBuffer entry = ByteBuffer.allocate(4); entry.putInt(rng.nextInt(maxInt)); entry.position(0); entries.add(entry.array()); entriesSize.add(entry.array().length); lh.addEntry(entry.array()); } long last = lh.readLastConfirmed(); assertTrue("Last confirmed add: " + last, last == (numEntriesToWrite - 2)); LOG.debug("*** WRITE COMPLETE ***"); // close ledger lh.close(); /* * Asynchronous call to read last confirmed entry */ lh = bkc.createLedger(digestType, ledgerPassword); // bkc.initMessageDigest("SHA1"); ledgerId = lh.getId(); LOG.info("Ledger ID: " + lh.getId()); for (int i = 0; i < numEntriesToWrite; i++) { ByteBuffer entry = ByteBuffer.allocate(4); entry.putInt(rng.nextInt(maxInt)); entry.position(0); entries.add(entry.array()); entriesSize.add(entry.array().length); lh.addEntry(entry.array()); } SyncObj sync = new SyncObj(); lh.asyncReadLastConfirmed(this, sync); // Wait for for last confirmed synchronized (sync) { while (sync.lastConfirmed == LedgerHandle.INVALID_ENTRY_ID) { LOG.debug("Counter = " + sync.lastConfirmed); sync.wait(); } assertEquals("Error reading", BKException.Code.OK, sync.getReturnCode()); } assertTrue("Last confirmed add: " + sync.lastConfirmed, sync.lastConfirmed == (numEntriesToWrite - 2)); LOG.debug("*** WRITE COMPLETE ***"); // close ledger lh.close(); } catch (BKException e) { LOG.error("Test failed", e); fail("Test failed due to BookKeeper exception"); } catch (InterruptedException e) { LOG.error("Test failed", e); fail("Test failed due to interruption"); } } @Override public void addComplete(int rc, LedgerHandle lh, long entryId, Object ctx) { SyncObj sync = (SyncObj) ctx; sync.setReturnCode(rc); synchronized (sync) { sync.counter++; sync.notify(); } } @Override public void readComplete(int rc, LedgerHandle lh, Enumeration seq, Object ctx) { SyncObj sync = (SyncObj) ctx; 
sync.setLedgerEntries(seq); sync.setReturnCode(rc); synchronized (sync) { sync.value = true; sync.notify(); } } @Override public void readLastConfirmedComplete(int rc, long lastConfirmed, Object ctx) { SyncObj sync = (SyncObj) ctx; sync.setReturnCode(rc); synchronized(sync) { sync.lastConfirmed = lastConfirmed; sync.notify(); } } @Override @Before public void setUp() throws Exception { super.setUp(); rng = new Random(System.currentTimeMillis()); // Initialize the Random // Number Generator entries = new ArrayList(); // initialize the entries list entriesSize = new ArrayList(); } /* Clean up a directory recursively */ protected boolean cleanUpDir(File dir) { if (dir.isDirectory()) { LOG.info("Cleaning up " + dir.getName()); String[] children = dir.list(); for (String string : children) { boolean success = cleanUpDir(new File(dir, string)); if (!success) return false; } } // The directory is now empty so delete it return dir.delete(); } /* User for testing purposes, void */ class emptyWatcher implements Watcher { @Override public void process(WatchedEvent event) { } } } BookieZKExpireTest.java000066400000000000000000000077261244507361200350210ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/test/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
* */ package org.apache.bookkeeper.test; import java.io.File; import org.junit.Test; import org.junit.Before; import org.junit.After; import static org.junit.Assert.*; import org.apache.bookkeeper.conf.ServerConfiguration; import java.util.HashSet; import junit.framework.TestCase; import org.apache.bookkeeper.proto.BookieServer; import org.apache.bookkeeper.bookie.Bookie; public class BookieZKExpireTest extends BookKeeperClusterTestCase { public BookieZKExpireTest() { super(0); // 6000 is minimum due to default tick time baseConf.setZkTimeout(6000); baseClientConf.setZkTimeout(6000); } @Test(timeout=60000) public void testBookieServerZKExpireBehaviour() throws Exception { BookieServer server = null; try { File f = File.createTempFile("bookieserver", "test"); f.delete(); f.mkdir(); HashSet threadset = new HashSet(); int threadCount = Thread.activeCount(); Thread threads[] = new Thread[threadCount*2]; threadCount = Thread.enumerate(threads); for(int i = 0; i < threadCount; i++) { if (threads[i].getName().indexOf("SendThread") != -1) { threadset.add(threads[i]); } } ServerConfiguration conf = newServerConfiguration(PortManager.nextFreePort(), zkUtil.getZooKeeperConnectString(), f, new File[] { f }); server = new BookieServer(conf); server.start(); int secondsToWait = 5; while (!server.isRunning()) { Thread.sleep(1000); if (secondsToWait-- <= 0) { fail("Bookie never started"); } } Thread sendthread = null; threadCount = Thread.activeCount(); threads = new Thread[threadCount*2]; threadCount = Thread.enumerate(threads); for(int i = 0; i < threadCount; i++) { if (threads[i].getName().indexOf("SendThread") != -1 && !threadset.contains(threads[i])) { sendthread = threads[i]; break; } } assertNotNull("Send thread not found", sendthread); sendthread.suspend(); Thread.sleep(2*conf.getZkTimeout()); sendthread.resume(); // allow watcher thread to run secondsToWait = 20; while (server.isBookieRunning() || server.isNioServerRunning() || server.isRunning()) { Thread.sleep(1000); if (secondsToWait-- <= 0) { break; } } assertFalse("Bookie should have shutdown on losing zk session", server.isBookieRunning()); assertFalse("Nio Server should have shutdown on losing zk session", server.isNioServerRunning()); assertFalse("Bookie Server should have shutdown on losing zk session", server.isRunning()); } finally { server.shutdown(); } } } bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/test/CloseTest.java000066400000000000000000000054561244507361200333110ustar00rootroot00000000000000package org.apache.bookkeeper.test; /* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
* */ import org.junit.*; import org.apache.bookkeeper.client.LedgerHandle; import org.apache.bookkeeper.client.BookKeeper.DigestType; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * This unit test tests closing ledgers sequentially. It creates 4 ledgers, then * writes 100 entries to each ledger and closes them. * */ public class CloseTest extends BaseTestCase { static Logger LOG = LoggerFactory.getLogger(CloseTest.class); DigestType digestType; public CloseTest(DigestType digestType) { super(3); this.digestType = digestType; } @Test(timeout=60000) public void testClose() throws Exception { /* * Create 4 ledgers. */ int numLedgers = 4; int numMsgs = 100; LedgerHandle[] lh = new LedgerHandle[numLedgers]; for (int i = 0; i < numLedgers; i++) { lh[i] = bkc.createLedger(digestType, "".getBytes()); } String tmp = "BookKeeper is cool!"; /* * Write 100 entries to each ledger. */ for (int i = 0; i < numMsgs; i++) { for (int j = 0; j < numLedgers; j++) { lh[j].addEntry(tmp.getBytes()); } } for (int i = 0; i < numLedgers; i++) { lh[i].close(); } } @Test(timeout=60000) public void testCloseByOthers() throws Exception { int numLedgers = 1; int numMsgs = 10; LedgerHandle lh = bkc.createLedger(digestType, "".getBytes()); String tmp = "BookKeeper is cool!"; /* * Write 10 entries to lh. */ for (int i = 0; i < numMsgs; i++) { lh.addEntry(tmp.getBytes()); } // a second handle opens the ledger LedgerHandle lh2 = bkc.openLedger(lh.getId(), digestType, "".getBytes()); // opening recovers the ledger, so it is now closed and its metadata changed; // closing the second handle should nevertheless succeed lh2.close(); } } ConcurrentLedgerTest.java000066400000000000000000000150411244507361200354210ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/test/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
* */ package org.apache.bookkeeper.test; import java.io.File; import java.io.IOException; import java.net.InetSocketAddress; import java.nio.ByteBuffer; import java.util.concurrent.Semaphore; import java.util.concurrent.atomic.AtomicInteger; import org.apache.bookkeeper.bookie.Bookie; import org.apache.bookkeeper.bookie.BookieException; import org.apache.bookkeeper.conf.ServerConfiguration; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.WriteCallback; import org.junit.After; import org.junit.Before; import org.junit.Test; import junit.framework.TestCase; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Tests writing to concurrent ledgers */ public class ConcurrentLedgerTest extends TestCase { static Logger LOG = LoggerFactory.getLogger(ConcurrentLedgerTest.class); Bookie bookie; File txnDir, ledgerDir; int recvTimeout = 10000; Semaphore throttle; ServerConfiguration conf; @Override @Before public void setUp() throws Exception { String txnDirName = System.getProperty("txnDir"); if (txnDirName != null) { txnDir = new File(txnDirName); } String ledgerDirName = System.getProperty("ledgerDir"); if (ledgerDirName != null) { ledgerDir = new File(ledgerDirName); } File tmpFile = File.createTempFile("book", ".txn", txnDir); tmpFile.delete(); txnDir = new File(tmpFile.getParent(), tmpFile.getName()+".dir"); txnDir.mkdirs(); tmpFile = File.createTempFile("book", ".ledger", ledgerDir); ledgerDir = new File(tmpFile.getParent(), tmpFile.getName()+".dir"); ledgerDir.mkdirs(); conf = new ServerConfiguration(); conf.setAllowLoopback(true); conf.setBookiePort(5000); conf.setZkServers(null); conf.setJournalDirName(txnDir.getPath()); conf.setLedgerDirNames(new String[] { ledgerDir.getPath() }); bookie = new Bookie(conf); bookie.start(); } static void recursiveDelete(File f) { if (f.isFile()) { f.delete(); } else { for(File i: f.listFiles()) { recursiveDelete(i); } f.delete(); } } @Override @After public void tearDown() { bookie.shutdown(); recursiveDelete(txnDir); recursiveDelete(ledgerDir); } byte zeros[] = new byte[16]; int iterations = 51; { String iterationsString = System.getProperty("iterations"); if (iterationsString != null) { iterations = Integer.parseInt(iterationsString); } } int iterationStep = 25; { String iterationsString = System.getProperty("iterationStep"); if (iterationsString != null) { iterationStep = Integer.parseInt(iterationsString); } } @Test(timeout=60000) public void testConcurrentWrite() throws IOException, InterruptedException, BookieException { int size = 1024; int totalwrites = 128; if (System.getProperty("totalwrites") != null) { totalwrites = Integer.parseInt(System.getProperty("totalwrites")); } LOG.info("Running up to " + iterations + " iterations"); LOG.info("Total writes = " + totalwrites); int ledgers; for(ledgers = 1; ledgers <= iterations; ledgers += iterationStep) { long duration = doWrites(ledgers, size, totalwrites); LOG.info(totalwrites + " on " + ledgers + " took " + duration + " ms"); } LOG.info("ledgers " + ledgers); for(ledgers = 1; ledgers <= iterations; ledgers += iterationStep) { long duration = doReads(ledgers, size, totalwrites); LOG.info(ledgers + " read " + duration + " ms"); } } private long doReads(int ledgers, int size, int totalwrites) throws IOException, InterruptedException, BookieException { long start = System.currentTimeMillis(); for(int i = 1; i <= totalwrites/ledgers; i++) { for(int j = 1; j <= ledgers; j++) { ByteBuffer entry = bookie.readEntry(j, i); // skip the ledger id and the entry id entry.getLong(); 
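// (the getLong() above consumed the ledger id; the next one skips the entry id, and the two assertEquals that follow check the j+2 and i+3 marker longs that doWrites stored in each entry) 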
entry.getLong(); assertEquals(j + "@" + i, j+2, entry.getLong()); assertEquals(j + "@" + i, i+3, entry.getLong()); } } long finish = System.currentTimeMillis(); return finish - start; } private long doWrites(int ledgers, int size, int totalwrites) throws IOException, InterruptedException, BookieException { throttle = new Semaphore(10000); WriteCallback cb = new WriteCallback() { @Override public void writeComplete(int rc, long ledgerId, long entryId, InetSocketAddress addr, Object ctx) { AtomicInteger counter = (AtomicInteger)ctx; counter.getAndIncrement(); throttle.release(); } }; AtomicInteger counter = new AtomicInteger(); long start = System.currentTimeMillis(); for(int i = 1; i <= totalwrites/ledgers; i++) { for(int j = 1; j <= ledgers; j++) { ByteBuffer bytes = ByteBuffer.allocate(size); bytes.putLong(j); bytes.putLong(i); bytes.putLong(j+2); bytes.putLong(i+3); bytes.put(("This is ledger " + j + " entry " + i).getBytes()); bytes.position(0); bytes.limit(bytes.capacity()); throttle.acquire(); bookie.addEntry(bytes, cb, counter, zeros); } } long finish = System.currentTimeMillis(); return finish - start; } } ConditionalSetTest.java000066400000000000000000000075521244507361200351030ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/test/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.test; import java.io.File; import java.io.IOException; import java.net.InetSocketAddress; import java.nio.ByteBuffer; import java.util.ArrayList; import java.util.Random; import java.util.concurrent.Semaphore; import java.util.concurrent.atomic.AtomicInteger; import org.apache.bookkeeper.conf.ClientConfiguration; import org.apache.bookkeeper.bookie.Bookie; import org.apache.bookkeeper.client.BookKeeperTestClient; import org.apache.bookkeeper.client.BKException; import org.apache.bookkeeper.client.BookKeeper; import org.apache.bookkeeper.client.LedgerHandle; import org.apache.bookkeeper.client.BookKeeper.DigestType; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.WriteCallback; import org.apache.zookeeper.KeeperException; import org.junit.After; import org.junit.Before; import org.junit.Test; import junit.framework.TestCase; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Tests conditional set of the ledger metadata znode. 
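* The set is conditional on the metadata znode's version: opening the ledger from the reader client below triggers recovery and rewrites the metadata, so the writer's subsequent close operates on a stale version and should fail rather than overwrite the recovered state. 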
*/ public class ConditionalSetTest extends BaseTestCase { static Logger LOG = LoggerFactory.getLogger(ConditionalSetTest.class); byte[] entry; DigestType digestType; BookKeeper bkcReader; public ConditionalSetTest(DigestType digestType) { super(3); this.digestType = digestType; } @Override @Before public void setUp() throws IOException, Exception { super.setUp(); entry = new byte[10]; // initialize the entry payload this.bkcReader = new BookKeeperTestClient(baseClientConf); } /** * Opens a ledger for reading before the ledger writer has closed it, which * triggers ledger recovery. When the ledger writer then tries to close the * ledger, the close operation should fail. * * @throws IOException * @throws InterruptedException * @throws BKException * @throws KeeperException */ @Test(timeout=60000) public void testConditionalSet() throws IOException, InterruptedException, BKException, KeeperException { LedgerHandle lhWrite = bkc.createLedger(digestType, new byte[] { 'a', 'b' }); long ledgerId = lhWrite.getId(); LOG.debug("Ledger ID: " + lhWrite.getId()); for (int i = 0; i < 10; i++) { LOG.debug("Adding entry: " + i); lhWrite.addEntry(entry); } /* * Open a ledger for reading, which triggers recovery, since the ledger * is still open. */ LOG.debug("Instantiating new bookkeeper client."); LedgerHandle lhRead = bkcReader.openLedger(lhWrite.getId(), digestType, new byte[] { 'a', 'b' }); LOG.debug("Opened the ledger already"); /* * Writer tries to close the ledger, and it should fail. */ try { lhWrite.close(); fail("Should have received an exception when trying to close the ledger."); } catch (BKException e) { /* * Correctly failed to close the ledger */ } } } ConfigurationTest.java000066400000000000000000000051321244507361200347630ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/test/** * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
* */ package org.apache.bookkeeper.test; import org.apache.bookkeeper.conf.ServerConfiguration; import org.apache.bookkeeper.conf.ClientConfiguration; import junit.framework.TestCase; import org.junit.Test; public class ConfigurationTest extends TestCase { @Test(timeout=60000) public void testConfigurationOverwrite() { System.clearProperty("zkServers"); ServerConfiguration conf = new ServerConfiguration(); assertEquals(null, conf.getZkServers()); // override setting from property System.setProperty("zkServers", "server1"); // it affects previously created configurations, as long as the setting has not been overwritten assertEquals("server1", conf.getZkServers()); ServerConfiguration conf2 = new ServerConfiguration(); assertEquals("server1", conf2.getZkServers()); System.clearProperty("zkServers"); // load other configuration ServerConfiguration newConf = new ServerConfiguration(); assertEquals(null, newConf.getZkServers()); newConf.setZkServers("newserver"); assertEquals("newserver", newConf.getZkServers()); conf2.loadConf(newConf); assertEquals("newserver", conf2.getZkServers()); } @Test(timeout=60000) public void testGetZkServers() { System.setProperty("zkServers", "server1:port1,server2:port2"); ServerConfiguration conf = new ServerConfiguration(); ClientConfiguration clientConf = new ClientConfiguration(); assertEquals("zookeeper connect string doesn't match in server configuration", "server1:port1,server2:port2", conf.getZkServers()); assertEquals("zookeeper connect string doesn't match in client configuration", "server1:port1,server2:port2", clientConf.getZkServers()); } } LedgerCreateDeleteTest.java000066400000000000000000000061311244507361200356250ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/testpackage org.apache.bookkeeper.test; /* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
* */ import static org.junit.Assert.fail; import java.util.ArrayList; import org.apache.bookkeeper.client.LedgerHandle; import org.apache.bookkeeper.client.BKException; import org.apache.bookkeeper.client.BookKeeper.DigestType; import org.junit.Before; import org.junit.Test; /** * Test Create/Delete ledgers */ public class LedgerCreateDeleteTest extends BookKeeperClusterTestCase { public LedgerCreateDeleteTest() { super(1); } @Override @Before public void setUp() throws Exception { baseConf.setOpenFileLimit(1); super.setUp(); } @Test(timeout=60000) public void testCreateDeleteLedgers() throws Exception { int numLedgers = 3; ArrayList ledgers = new ArrayList(); for (int i=0; i configs() { String[] ledgerManagers = { "org.apache.bookkeeper.meta.FlatLedgerManagerFactory", "org.apache.bookkeeper.meta.HierarchicalLedgerManagerFactory", "org.apache.bookkeeper.meta.MSLedgerManagerFactory", }; ArrayList<Object[]> cfgs = new ArrayList<Object[]>(ledgerManagers.length); DigestType[] digestTypes = new DigestType[] { DigestType.MAC, DigestType.CRC32 }; for (String lm : ledgerManagers) { for (DigestType type : digestTypes) { cfgs.add(new Object[] { lm, type }); } } return cfgs; } } MultiLedgerManagerTestCase.java000066400000000000000000000035501244507361200364620ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/testpackage org.apache.bookkeeper.test; import java.util.ArrayList; import java.util.Collection; import org.junit.runner.RunWith; import org.junit.runners.Parameterized; import org.junit.runners.Parameterized.Parameters; /* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ import org.apache.bookkeeper.meta.LedgerManagerFactory; /** * Test case run over different ledger managers. */ @RunWith(Parameterized.class) public abstract class MultiLedgerManagerTestCase extends BookKeeperClusterTestCase { public MultiLedgerManagerTestCase(int numBookies) { super(numBookies); } @Parameters public static Collection<Object[]> configs() { String[] ledgerManagers = new String[] { "org.apache.bookkeeper.meta.FlatLedgerManagerFactory", "org.apache.bookkeeper.meta.HierarchicalLedgerManagerFactory", "org.apache.bookkeeper.meta.MSLedgerManagerFactory", }; ArrayList<Object[]> cfgs = new ArrayList<Object[]>(ledgerManagers.length); for (String lm : ledgerManagers) { cfgs.add(new Object[] { lm }); } return cfgs; } } NIOServerFactoryTest.java000066400000000000000000000044401244507361200353210ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/testpackage org.apache.bookkeeper.test; /* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. 
The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ import java.net.Socket; import java.nio.ByteBuffer; import org.apache.bookkeeper.conf.ServerConfiguration; import org.apache.bookkeeper.proto.NIOServerFactory; import org.apache.bookkeeper.proto.NIOServerFactory.Cnxn; import org.apache.bookkeeper.proto.NIOServerFactory.PacketProcessor; import org.junit.Test; import junit.framework.TestCase; public class NIOServerFactoryTest extends TestCase { PacketProcessor problemProcessor = new PacketProcessor() { public void processPacket(ByteBuffer packet, Cnxn src) { if (packet.getInt() == 1) { throw new RuntimeException("Really bad thing happened"); } src.sendResponse(new ByteBuffer[] { ByteBuffer.allocate(4) }); } }; @Test(timeout=60000) public void testProblemProcessor() throws Exception { ServerConfiguration conf = new ServerConfiguration(); conf.setAllowLoopback(true); int port = PortManager.nextFreePort(); conf.setBookiePort(port); NIOServerFactory factory = new NIOServerFactory(conf, problemProcessor); factory.start(); Socket s = new Socket("127.0.0.1", port); s.setSoTimeout(5000); try { s.getOutputStream().write("\0\0\0\4\0\0\0\1".getBytes()); s.getOutputStream().write("\0\0\0\4\0\0\0\2".getBytes()); s.getInputStream().read(); } finally { s.close(); factory.shutdown(); } } } bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/test/PortManager.java000066400000000000000000000035141244507361200336140ustar00rootroot00000000000000/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.test; import java.net.ServerSocket; import java.io.IOException; /** * Port manager allows a base port to be specified on the commandline. * Tests will then use ports, counting up from this base port. * This allows multiple instances of the bookkeeper tests to run at once. 
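* The base port is read from the "test.basePort" system property (default 15000, see getBasePort below); a sketch of typical use in a test, assuming this class is on the classpath: * <pre> * // run with -Dtest.basePort=20000 to shift this instance's port range * int port = PortManager.nextFreePort(); * </pre> 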
*/ public class PortManager { private static int nextPort = getBasePort(); public synchronized static int nextFreePort() { while (true) { ServerSocket ss = null; try { int port = nextPort++; ss = new ServerSocket(port); ss.setReuseAddress(true); return port; } catch (IOException ioe) { } finally { if (ss != null) { try { ss.close(); } catch (IOException ioe) {} } } } } private static int getBasePort() { return Integer.valueOf(System.getProperty("test.basePort", "15000")); } }ReadOnlyBookieTest.java000066400000000000000000000201531244507361200350220ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/test/** * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.test; import java.io.File; import java.util.Enumeration; import org.apache.bookkeeper.bookie.Bookie; import org.apache.bookkeeper.bookie.LedgerDirsManager; import org.apache.bookkeeper.client.BKException; import org.apache.bookkeeper.client.BookKeeper.DigestType; import org.apache.bookkeeper.client.LedgerEntry; import org.apache.bookkeeper.client.LedgerHandle; import org.apache.bookkeeper.conf.ServerConfiguration; /** * Test to verify the readonly feature of bookies */ public class ReadOnlyBookieTest extends BookKeeperClusterTestCase { public ReadOnlyBookieTest() { super(2); } /** * Check readonly bookie */ public void testBookieShouldServeAsReadOnly() throws Exception { killBookie(0); baseConf.setReadOnlyModeEnabled(true); startNewBookie(); LedgerHandle ledger = bkc.createLedger(2, 2, DigestType.MAC, "".getBytes()); // Check new bookie with readonly mode enabled. 
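// (the bookie at index 1 is the replacement started above with readonly mode enabled; fetch its ledger dirs and LedgerDirsManager so the dir can later be marked full) 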
File[] ledgerDirs = bsConfs.get(1).getLedgerDirs(); assertEquals("Only one ledger dir should be present", 1, ledgerDirs.length); Bookie bookie = bs.get(1).getBookie(); LedgerDirsManager ledgerDirsManager = bookie.getLedgerDirsManager(); for (int i = 0; i < 10; i++) { ledger.addEntry("data".getBytes()); } // Now add the current ledger dir to filled dirs list ledgerDirsManager.addToFilledDirs(new File(ledgerDirs[0], "current")); try { ledger.addEntry("data".getBytes()); } catch (BKException.BKNotEnoughBookiesException e) { // Expected } assertTrue("Bookie should be running and converted to readonly mode", bookie.isRunning() && bookie.isReadOnly()); // Now kill the other bookie and read entries from the readonly bookie killBookie(0); Enumeration<LedgerEntry> readEntries = ledger.readEntries(0, 9); while (readEntries.hasMoreElements()) { LedgerEntry entry = readEntries.nextElement(); assertEquals("Entry should contain correct data", "data", new String(entry.getEntry())); } } /** * check readOnlyModeEnabled=false */ public void testBookieShutdownIfReadOnlyModeNotEnabled() throws Exception { File[] ledgerDirs = bsConfs.get(1).getLedgerDirs(); assertEquals("Only one ledger dir should be present", 1, ledgerDirs.length); Bookie bookie = bs.get(1).getBookie(); LedgerHandle ledger = bkc.createLedger(2, 2, DigestType.MAC, "".getBytes()); LedgerDirsManager ledgerDirsManager = bookie.getLedgerDirsManager(); for (int i = 0; i < 10; i++) { ledger.addEntry("data".getBytes()); } // Now add the current ledger dir to filled dirs list ledgerDirsManager.addToFilledDirs(new File(ledgerDirs[0], "current")); try { ledger.addEntry("data".getBytes()); } catch (BKException.BKNotEnoughBookiesException e) { // Expected } // wait for up to 10 seconds for bookie to shut down for (int i = 0; i < 10 && bookie.isAlive(); i++) { Thread.sleep(1000); } assertFalse("Bookie should shutdown if readOnlyMode not enabled", bookie.isAlive()); } /** * Check multiple ledger dirs */ public void testBookieContinueWritingIfMultipleLedgersPresent() throws Exception { startNewBookieWithMultipleLedgerDirs(2); File[] ledgerDirs = bsConfs.get(1).getLedgerDirs(); assertEquals("Two ledger dirs should be present", 2, ledgerDirs.length); Bookie bookie = bs.get(1).getBookie(); LedgerHandle ledger = bkc.createLedger(2, 2, DigestType.MAC, "".getBytes()); LedgerDirsManager ledgerDirsManager = bookie.getLedgerDirsManager(); for (int i = 0; i < 10; i++) { ledger.addEntry("data".getBytes()); } // Now add the current ledger dir to filled dirs list ledgerDirsManager.addToFilledDirs(new File(ledgerDirs[0], "current")); for (int i = 0; i < 10; i++) { ledger.addEntry("data".getBytes()); } assertEquals("writable dirs should have one dir", 1, ledgerDirsManager.getWritableLedgerDirs().size()); assertTrue("Bookie should still be alive while a writable ledger dir remains", bookie.isAlive()); } private void startNewBookieWithMultipleLedgerDirs(int numOfLedgerDirs) throws Exception { ServerConfiguration conf = bsConfs.get(1); killBookie(1); File[] ledgerDirs = new File[numOfLedgerDirs]; for (int i = 0; i < numOfLedgerDirs; i++) { File dir = File.createTempFile("bookie", "test"); tmpDirs.add(dir); dir.delete(); dir.mkdir(); ledgerDirs[i] = dir; } ServerConfiguration newConf = newServerConfiguration( conf.getBookiePort() + 1, zkUtil.getZooKeeperConnectString(), ledgerDirs[0], ledgerDirs); bsConfs.add(newConf); bs.add(startBookie(newConf)); } /** * Test ledger creation with readonly bookies */ public void testLedgerCreationShouldFailWithReadonlyBookie() throws Exception { killBookie(1); 
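// replace the killed bookie with one that has readonly mode enabled and force it into readonly mode; with only one writable bookie left, creating a ledger with an ensemble of two must fail 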
baseConf.setReadOnlyModeEnabled(true); startNewBookie(); bs.get(1).getBookie().transitionToReadOnlyMode(); try { bkc.readBookiesBlocking(); bkc.createLedger(2, 2, DigestType.CRC32, "".getBytes()); fail("Must throw exception, as there is one readonly bookie"); } catch (BKException e) { // Expected } } /** * Try to read a closed ledger from a restarted ReadOnlyBookie. */ public void testReadFromReadOnlyBookieShouldBeSuccess() throws Exception { LedgerHandle ledger = bkc.createLedger(2, 2, DigestType.MAC, "".getBytes()); for (int i = 0; i < 10; i++) { ledger.addEntry("data".getBytes()); } ledger.close(); bsConfs.get(1).setReadOnlyModeEnabled(true); bsConfs.get(1).setDiskCheckInterval(500); restartBookies(); // Check new bookie with readonly mode enabled. File[] ledgerDirs = bsConfs.get(1).getLedgerDirs(); assertEquals("Only one ledger dir should be present", 1, ledgerDirs.length); Bookie bookie = bs.get(1).getBookie(); LedgerDirsManager ledgerDirsManager = bookie.getLedgerDirsManager(); // Now add the current ledger dir to filled dirs list ledgerDirsManager.addToFilledDirs(new File(ledgerDirs[0], "current")); // Wait till Bookie converts to ReadOnly mode. Thread.sleep(1000); assertTrue("Bookie should be converted to readonly mode", bookie.isRunning() && bookie.isReadOnly()); // Now kill the other bookie and read entries from the readonly bookie killBookie(0); Enumeration<LedgerEntry> readEntries = ledger.readEntries(0, 9); while (readEntries.hasMoreElements()) { LedgerEntry entry = readEntries.nextElement(); assertEquals("Entry should contain correct data", "data", new String(entry.getEntry())); } } } TestBackwardCompat.java000066400000000000000000000516101244507361200350400ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/test/** * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
* */ package org.apache.bookkeeper.test; import java.io.File; import java.util.Enumeration; import java.util.Arrays; import java.net.InetAddress; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.junit.Test; import org.junit.Before; import org.junit.After; import static org.junit.Assert.*; import org.apache.bookkeeper.bookie.Bookie; import org.apache.bookkeeper.bookie.BookieException; import org.apache.bookkeeper.bookie.FileSystemUpgrade; import org.apache.bookkeeper.client.BookKeeperAdmin; import org.apache.bookkeeper.conf.ClientConfiguration; public class TestBackwardCompat { static Logger LOG = LoggerFactory.getLogger(TestBackwardCompat.class); private static ZooKeeperUtil zkUtil = new ZooKeeperUtil(); private static byte[] ENTRY_DATA = "ThisIsAnEntry".getBytes(); static void waitUp(int port) throws Exception { while(zkUtil.getZooKeeperClient().exists( "/ledgers/available/" + InetAddress.getLocalHost().getHostAddress() + ":" + port, false) == null) { Thread.sleep(500); } } @Before public void startZooKeeperServer() throws Exception { zkUtil.startServer(); } @After public void stopZooKeeperServer() throws Exception { zkUtil.killServer(); } /** * Version 4.0.0 classes */ static class Server400 { org.apache.bk_v4_0_0.bookkeeper.conf.ServerConfiguration conf; org.apache.bk_v4_0_0.bookkeeper.proto.BookieServer server = null; Server400(File journalDir, File ledgerDir, int port) throws Exception { conf = new org.apache.bk_v4_0_0.bookkeeper.conf.ServerConfiguration(); conf.setBookiePort(port); conf.setZkServers(zkUtil.getZooKeeperConnectString()); conf.setJournalDirName(journalDir.getPath()); conf.setLedgerDirNames(new String[] { ledgerDir.getPath() }); } void start() throws Exception { server = new org.apache.bk_v4_0_0.bookkeeper.proto.BookieServer(conf); server.start(); waitUp(conf.getBookiePort()); } org.apache.bk_v4_0_0.bookkeeper.conf.ServerConfiguration getConf() { return conf; } void stop() throws Exception { if (server != null) { server.shutdown(); } } } static class Ledger400 { org.apache.bk_v4_0_0.bookkeeper.client.BookKeeper bk; org.apache.bk_v4_0_0.bookkeeper.client.LedgerHandle lh; private Ledger400(org.apache.bk_v4_0_0.bookkeeper.client.BookKeeper bk, org.apache.bk_v4_0_0.bookkeeper.client.LedgerHandle lh) { this.bk = bk; this.lh = lh; } static Ledger400 newLedger() throws Exception { org.apache.bk_v4_0_0.bookkeeper.client.BookKeeper newbk = new org.apache.bk_v4_0_0.bookkeeper.client.BookKeeper(zkUtil.getZooKeeperConnectString()); org.apache.bk_v4_0_0.bookkeeper.client.LedgerHandle newlh = newbk.createLedger(1, 1, org.apache.bk_v4_0_0.bookkeeper.client.BookKeeper.DigestType.CRC32, "foobar".getBytes()); return new Ledger400(newbk, newlh); } static Ledger400 openLedger(long id) throws Exception { org.apache.bk_v4_0_0.bookkeeper.client.BookKeeper newbk = new org.apache.bk_v4_0_0.bookkeeper.client.BookKeeper(zkUtil.getZooKeeperConnectString()); org.apache.bk_v4_0_0.bookkeeper.client.LedgerHandle newlh = newbk.openLedger(id, org.apache.bk_v4_0_0.bookkeeper.client.BookKeeper.DigestType.CRC32, "foobar".getBytes()); return new Ledger400(newbk, newlh); } long getId() { return lh.getId(); } void write100() throws Exception { for (int i = 0; i < 100; i++) { lh.addEntry(ENTRY_DATA); } } long readAll() throws Exception { long count = 0; Enumeration entries = lh.readEntries(0, lh.getLastAddConfirmed()); while (entries.hasMoreElements()) { assertTrue("entry data doesn't match", Arrays.equals(entries.nextElement().getEntry(), ENTRY_DATA)); count++; } return count; } void 
close() throws Exception { try { if (lh != null) { lh.close(); } } finally { if (bk != null) { bk.close(); } } } } /** * Version 4.1.0 classes */ static class Server410 { org.apache.bk_v4_1_0.bookkeeper.conf.ServerConfiguration conf; org.apache.bk_v4_1_0.bookkeeper.proto.BookieServer server = null; Server410(File journalDir, File ledgerDir, int port) throws Exception { conf = new org.apache.bk_v4_1_0.bookkeeper.conf.ServerConfiguration(); conf.setBookiePort(port); conf.setZkServers(zkUtil.getZooKeeperConnectString()); conf.setJournalDirName(journalDir.getPath()); conf.setLedgerDirNames(new String[] { ledgerDir.getPath() }); } void start() throws Exception { server = new org.apache.bk_v4_1_0.bookkeeper.proto.BookieServer(conf); server.start(); waitUp(conf.getBookiePort()); } org.apache.bk_v4_1_0.bookkeeper.conf.ServerConfiguration getConf() { return conf; } void stop() throws Exception { if (server != null) { server.shutdown(); } } } static class Ledger410 { org.apache.bk_v4_1_0.bookkeeper.client.BookKeeper bk; org.apache.bk_v4_1_0.bookkeeper.client.LedgerHandle lh; private Ledger410(org.apache.bk_v4_1_0.bookkeeper.client.BookKeeper bk, org.apache.bk_v4_1_0.bookkeeper.client.LedgerHandle lh) { this.bk = bk; this.lh = lh; } static Ledger410 newLedger() throws Exception { org.apache.bk_v4_1_0.bookkeeper.client.BookKeeper newbk = new org.apache.bk_v4_1_0.bookkeeper.client.BookKeeper(zkUtil.getZooKeeperConnectString()); org.apache.bk_v4_1_0.bookkeeper.client.LedgerHandle newlh = newbk.createLedger(1, 1, org.apache.bk_v4_1_0.bookkeeper.client.BookKeeper.DigestType.CRC32, "foobar".getBytes()); return new Ledger410(newbk, newlh); } static Ledger410 openLedger(long id) throws Exception { org.apache.bk_v4_1_0.bookkeeper.client.BookKeeper newbk = new org.apache.bk_v4_1_0.bookkeeper.client.BookKeeper(zkUtil.getZooKeeperConnectString()); org.apache.bk_v4_1_0.bookkeeper.client.LedgerHandle newlh = newbk.openLedger(id, org.apache.bk_v4_1_0.bookkeeper.client.BookKeeper.DigestType.CRC32, "foobar".getBytes()); return new Ledger410(newbk, newlh); } long getId() { return lh.getId(); } void write100() throws Exception { for (int i = 0; i < 100; i++) { lh.addEntry(ENTRY_DATA); } } long readAll() throws Exception { long count = 0; Enumeration entries = lh.readEntries(0, lh.getLastAddConfirmed()); while (entries.hasMoreElements()) { assertTrue("entry data doesn't match", Arrays.equals(entries.nextElement().getEntry(), ENTRY_DATA)); count++; } return count; } void close() throws Exception { try { if (lh != null) { lh.close(); } } finally { if (bk != null) { bk.close(); } } } } /** * Current version classes */ static class ServerCurrent { org.apache.bookkeeper.conf.ServerConfiguration conf; org.apache.bookkeeper.proto.BookieServer server = null; ServerCurrent(File journalDir, File ledgerDir, int port) throws Exception { conf = new org.apache.bookkeeper.conf.ServerConfiguration(); conf.setBookiePort(port); conf.setAllowLoopback(true); conf.setZkServers(zkUtil.getZooKeeperConnectString()); conf.setJournalDirName(journalDir.getPath()); conf.setLedgerDirNames(new String[] { ledgerDir.getPath() }); } void start() throws Exception { server = new org.apache.bookkeeper.proto.BookieServer(conf); server.start(); waitUp(conf.getBookiePort()); } org.apache.bookkeeper.conf.ServerConfiguration getConf() { return conf; } void stop() throws Exception { if (server != null) { server.shutdown(); } } } static class LedgerCurrent { org.apache.bookkeeper.client.BookKeeper bk; org.apache.bookkeeper.client.LedgerHandle lh; private 
LedgerCurrent(org.apache.bookkeeper.client.BookKeeper bk, org.apache.bookkeeper.client.LedgerHandle lh) { this.bk = bk; this.lh = lh; } static LedgerCurrent newLedger() throws Exception { org.apache.bookkeeper.client.BookKeeper newbk = new org.apache.bookkeeper.client.BookKeeper(zkUtil.getZooKeeperConnectString()); org.apache.bookkeeper.client.LedgerHandle newlh = newbk.createLedger(1, 1, org.apache.bookkeeper.client.BookKeeper.DigestType.CRC32, "foobar".getBytes()); return new LedgerCurrent(newbk, newlh); } static LedgerCurrent openLedger(long id) throws Exception { org.apache.bookkeeper.client.BookKeeper newbk = new org.apache.bookkeeper.client.BookKeeper(zkUtil.getZooKeeperConnectString()); org.apache.bookkeeper.client.LedgerHandle newlh = newbk.openLedger(id, org.apache.bookkeeper.client.BookKeeper.DigestType.CRC32, "foobar".getBytes()); return new LedgerCurrent(newbk, newlh); } long getId() { return lh.getId(); } void write100() throws Exception { for (int i = 0; i < 100; i++) { lh.addEntry(ENTRY_DATA); } } long readAll() throws Exception { long count = 0; Enumeration entries = lh.readEntries(0, lh.getLastAddConfirmed()); while (entries.hasMoreElements()) { assertTrue("entry data doesn't match", Arrays.equals(entries.nextElement().getEntry(), ENTRY_DATA)); count++; } return count; } void close() throws Exception { try { if (lh != null) { lh.close(); } } finally { if (bk != null) { bk.close(); } } } } /* * Test old cookie accessing the new version formatted cluster. */ @Test(timeout=60000) public void testOldCookieAccessingNewCluster() throws Exception { File journalDir = File.createTempFile("bookie", "journal"); journalDir.delete(); journalDir.mkdir(); File ledgerDir = File.createTempFile("bookie", "ledger"); ledgerDir.delete(); ledgerDir.mkdir(); int port = PortManager.nextFreePort(); // start old server Server410 s410 = new Server410(journalDir, ledgerDir, port); s410.start(); Ledger410 l410 = Ledger410.newLedger(); l410.write100(); l410.getId(); l410.close(); s410.stop(); // Format the metadata using current version ServerCurrent currentServer = new ServerCurrent(journalDir, ledgerDir, port); BookKeeperAdmin.format(new ClientConfiguration(currentServer.conf), false, true); // start the current version server with old version cookie try { currentServer.start(); fail("Bookie should not start with old cookie"); } catch (BookieException e) { assertTrue("Old Cookie should not be able to access", e.getMessage().contains("instanceId")); } finally { currentServer.stop(); } // Format the bookie also and restart assertTrue("Format should be successful", Bookie.format(currentServer.conf, false, true)); try { currentServer = null; currentServer = new ServerCurrent(journalDir, ledgerDir, port); currentServer.start(); } finally { if (null != currentServer) { currentServer.stop(); } } } /** * Test compatibility between version 4.0.0 and the current version. * Incompatibilities are: * - Current client will not be able to talk to 4.0.0 server. * - 4.0.0 client will not be able to fence ledgers on current server. * - Current server won't start with 4.0.0 server directories without upgrade. 
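* The last point is exercised below by running FileSystemUpgrade.upgrade(scur.getConf()) after the current server refuses to start on the 4.0.0 directory layout. 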
*/ @Test(timeout=60000) public void testCompat400() throws Exception { File journalDir = File.createTempFile("bookie", "journal"); journalDir.delete(); journalDir.mkdir(); File ledgerDir = File.createTempFile("bookie", "ledger"); ledgerDir.delete(); ledgerDir.mkdir(); int port = PortManager.nextFreePort(); // start server, upgrade Server400 s400 = new Server400(journalDir, ledgerDir, port); s400.start(); Ledger400 l400 = Ledger400.newLedger(); l400.write100(); long oldLedgerId = l400.getId(); l400.close(); // Check that current client isn't able to write to old server LedgerCurrent lcur = LedgerCurrent.newLedger(); try { lcur.write100(); fail("Current client shouldn't be able to write to 4.0.0 server"); } catch (Exception e) { } lcur.close(); s400.stop(); // Start the current server, will require a filesystem upgrade ServerCurrent scur = new ServerCurrent(journalDir, ledgerDir, port); try { scur.start(); fail("Shouldn't be able to start without directory upgrade"); } catch (Exception e) { } FileSystemUpgrade.upgrade(scur.getConf()); scur.start(); // check that old client can read its old ledgers on new server l400 = Ledger400.openLedger(oldLedgerId); assertEquals(100, l400.readAll()); l400.close(); // check that old client can create ledgers on new server l400 = Ledger400.newLedger(); l400.write100(); l400.close(); // check that current client can read old ledger lcur = LedgerCurrent.openLedger(oldLedgerId); assertEquals(100, lcur.readAll()); lcur.close(); // verify the old ledger once more with the current client lcur = LedgerCurrent.openLedger(oldLedgerId); assertEquals(100, lcur.readAll()); lcur.close(); // check that old client can not fence a current client // due to lack of password lcur = LedgerCurrent.newLedger(); lcur.write100(); long fenceLedgerId = lcur.getId(); try { l400 = Ledger400.openLedger(fenceLedgerId); fail("Shouldn't be able to open ledger"); } catch (Exception e) { // correct behaviour } lcur.write100(); lcur.close(); lcur = LedgerCurrent.openLedger(fenceLedgerId); assertEquals(200, lcur.readAll()); lcur.close(); scur.stop(); } /** * Test compatibility between version 4.1.0 and the current version. * - A 4.1.0 client is not able to open a ledger created by the current * version due to a change in the ledger metadata format. * - Otherwise, they should be compatible. 
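* Fencing is exercised in each direction below: an old client fencing an old client, a current client fencing an old client, and the incompatible case of an old client that cannot fence a current client because it cannot open the new-format ledger. 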
*/ @Test(timeout=60000) public void testCompat410() throws Exception { File journalDir = File.createTempFile("bookie", "journal"); journalDir.delete(); journalDir.mkdir(); File ledgerDir = File.createTempFile("bookie", "ledger"); ledgerDir.delete(); ledgerDir.mkdir(); int port = PortManager.nextFreePort(); // start server, upgrade Server410 s410 = new Server410(journalDir, ledgerDir, port); s410.start(); Ledger410 l410 = Ledger410.newLedger(); l410.write100(); long oldLedgerId = l410.getId(); l410.close(); // Check that current client can write to old server LedgerCurrent lcur = LedgerCurrent.newLedger(); lcur.write100(); lcur.close(); s410.stop(); // Start the current server, will not require a filesystem upgrade ServerCurrent scur = new ServerCurrent(journalDir, ledgerDir, port); scur.start(); // check that old client can read its old ledgers on new server l410 = Ledger410.openLedger(oldLedgerId); assertEquals(100, l410.readAll()); l410.close(); // check that old client can create ledgers on new server l410 = Ledger410.newLedger(); l410.write100(); l410.close(); // check that an old client can fence an old client l410 = Ledger410.newLedger(); l410.write100(); Ledger410 l410f = Ledger410.openLedger(l410.getId()); try { l410.write100(); fail("Shouldn't be able to write"); } catch (Exception e) { // correct behaviour } l410f.close(); try { l410.close(); fail("Shouldn't be able to close"); } catch (Exception e) { // correct } // check that a new client can fence an old client // and the old client can continue to read that ledger l410 = Ledger410.newLedger(); l410.write100(); oldLedgerId = l410.getId(); lcur = LedgerCurrent.openLedger(oldLedgerId); try { l410.write100(); fail("Shouldn't be able to write"); } catch (Exception e) { // correct behaviour } try { l410.close(); fail("Shouldn't be able to close"); } catch (Exception e) { // correct } lcur.close(); l410 = Ledger410.openLedger(oldLedgerId); assertEquals(100, l410.readAll()); l410.close(); // check that current client can read old ledger lcur = LedgerCurrent.openLedger(oldLedgerId); assertEquals(100, lcur.readAll()); lcur.close(); // verify the old ledger once more with the current client lcur = LedgerCurrent.openLedger(oldLedgerId); assertEquals(100, lcur.readAll()); lcur.close(); // check that old client can not fence a current client // since it cannot open a new ledger due to the format changes lcur = LedgerCurrent.newLedger(); lcur.write100(); long fenceLedgerId = lcur.getId(); try { l410 = Ledger410.openLedger(fenceLedgerId); fail("Shouldn't be able to open ledger"); } catch (Exception e) { // correct behaviour } lcur.write100(); lcur.close(); scur.stop(); } } TestCallbacks.java000066400000000000000000000047561244507361200340420ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/test/** * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. 
See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.test; import org.apache.bookkeeper.client.AsyncCallback.AddCallback; import org.apache.bookkeeper.client.BKException; import org.apache.bookkeeper.client.LedgerHandle; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.GenericCallback; import com.google.common.util.concurrent.AbstractFuture; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Callbacks implemented with Guava's AbstractFuture, to be used in tests */ public class TestCallbacks { private static final Logger logger = LoggerFactory.getLogger(TestCallbacks.class); public static class GenericCallbackFuture<T> extends AbstractFuture<T> implements GenericCallback<T> { @Override public void operationComplete(int rc, T value) { if (rc != BKException.Code.OK) { setException(BKException.create(rc)); } else { set(value); } } } public static class AddCallbackFuture extends AbstractFuture<Long> implements AddCallback { private final long expectedEntryId; public AddCallbackFuture(long entryId) { this.expectedEntryId = entryId; } public long getExpectedEntryId() { return expectedEntryId; } @Override public void addComplete(int rc, LedgerHandle lh, long entryId, Object ctx) { logger.info("Add entry {} completed : entryId = {}, rc = {}", new Object[] { expectedEntryId, entryId, rc }); if (rc != BKException.Code.OK) { setException(BKException.create(rc)); } else { set(entryId); } } } } ZooKeeperUtil.java000066400000000000000000000126341244507361200340620ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/test/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
* */ package org.apache.bookkeeper.test; import java.io.File; import java.io.IOException; import java.net.InetSocketAddress; import org.apache.bookkeeper.util.ZkUtils; import org.apache.bookkeeper.zookeeper.ZooKeeperWatcherBase; import org.apache.commons.io.FileUtils; import java.util.concurrent.CountDownLatch; import org.apache.zookeeper.ZooKeeper; import org.apache.zookeeper.CreateMode; import org.apache.zookeeper.ZooDefs.Ids; import org.apache.zookeeper.server.NIOServerCnxnFactory; import org.apache.zookeeper.server.ZooKeeperServer; import org.apache.zookeeper.test.ClientBase; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import static org.junit.Assert.*; public class ZooKeeperUtil { static final Logger LOG = LoggerFactory.getLogger(ZooKeeperUtil.class); // ZooKeeper related variables protected final static Integer zooKeeperPort = PortManager.nextFreePort(); private final InetSocketAddress zkaddr; protected ZooKeeperServer zks; protected ZooKeeper zkc; // zookeeper client protected NIOServerCnxnFactory serverFactory; protected File ZkTmpDir; private final String connectString; public ZooKeeperUtil() { zkaddr = new InetSocketAddress(zooKeeperPort); connectString= "localhost:" + zooKeeperPort; } public ZooKeeper getZooKeeperClient() { return zkc; } public String getZooKeeperConnectString() { return connectString; } public void startServer() throws Exception { // create a ZooKeeper server(dataDir, dataLogDir, port) LOG.debug("Running ZK server"); // ServerStats.registerAsConcrete(); ClientBase.setupTestEnv(); ZkTmpDir = File.createTempFile("zookeeper", "test"); ZkTmpDir.delete(); ZkTmpDir.mkdir(); zks = new ZooKeeperServer(ZkTmpDir, ZkTmpDir, ZooKeeperServer.DEFAULT_TICK_TIME); serverFactory = new NIOServerCnxnFactory(); serverFactory.configure(zkaddr, 100); serverFactory.startup(zks); boolean b = ClientBase.waitForServerUp(getZooKeeperConnectString(), ClientBase.CONNECTION_TIMEOUT); LOG.debug("Server up: " + b); // create a zookeeper client LOG.debug("Instantiate ZK Client"); ZooKeeperWatcherBase w = new ZooKeeperWatcherBase(10000); zkc = ZkUtils.createConnectedZookeeperClient( getZooKeeperConnectString(), w); // initialize the zk client with values zkc.create("/ledgers", new byte[0], Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT); zkc.create("/ledgers/available", new byte[0], Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT); } public void sleepServer(final int seconds, final CountDownLatch l) throws InterruptedException, IOException { Thread[] allthreads = new Thread[Thread.activeCount()]; Thread.enumerate(allthreads); for (final Thread t : allthreads) { if (t.getName().contains("SyncThread:0")) { Thread sleeper = new Thread() { public void run() { try { t.suspend(); l.countDown(); Thread.sleep(seconds*1000); t.resume(); } catch (Exception e) { LOG.error("Error suspending thread", e); } } }; sleeper.start(); return; } } throw new IOException("ZooKeeper thread not found"); } public void expireSession(ZooKeeper zk) throws Exception { long id = zk.getSessionId(); byte[] password = zk.getSessionPasswd(); ZooKeeperWatcherBase w = new ZooKeeperWatcherBase(10000); ZooKeeper zk2 = new ZooKeeper(getZooKeeperConnectString(), zk.getSessionTimeout(), w, id, password); w.waitForConnection(); zk2.close(); } public void killServer() throws Exception { if (zkc != null) { zkc.close(); } // shutdown ZK server if (serverFactory != null) { serverFactory.shutdown(); assertTrue("waiting for server down", ClientBase.waitForServerDown(getZooKeeperConnectString(), ClientBase.CONNECTION_TIMEOUT)); } if (zks != 
null) { zks.getTxnLogFactory().close(); } // ServerStats.unregister(); FileUtils.deleteDirectory(ZkTmpDir); } } bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/util/000077500000000000000000000000001244507361200305255ustar00rootroot00000000000000TestDiskChecker.java000066400000000000000000000062061244507361200343340ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/util/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.bookkeeper.util; import static org.junit.Assert.assertTrue; import java.io.File; import java.io.IOException; import org.apache.bookkeeper.util.DiskChecker.DiskErrorException; import org.apache.bookkeeper.util.DiskChecker.DiskOutOfSpaceException; import org.junit.Before; import org.junit.Test; /** * Test to verify {@link DiskChecker} * */ public class TestDiskChecker { DiskChecker diskChecker; @Before public void setup() { diskChecker = new DiskChecker(0.95f); } /** * Check the disk-full condition */ @Test(expected = DiskOutOfSpaceException.class) public void testCheckDiskFull() throws IOException { File file = File.createTempFile("DiskCheck", "test"); long usableSpace = file.getUsableSpace(); long totalSpace = file.getTotalSpace(); diskChecker.setDiskSpaceThreshold((1f - ((float) usableSpace / (float) totalSpace)) - 0.05f); diskChecker.checkDiskFull(file); } /** * Check disk full on a non-existent file. In this case it should check the * parent file */ @Test(expected = DiskOutOfSpaceException.class) public void testCheckDiskFullOnNonExistFile() throws IOException { File file = File.createTempFile("DiskCheck", "test"); long usableSpace = file.getUsableSpace(); long totalSpace = file.getTotalSpace(); diskChecker.setDiskSpaceThreshold((1f - ((float) usableSpace / (float) totalSpace)) - 0.05f); assertTrue(file.delete()); diskChecker.checkDiskFull(file); } /** * Check disk error for a file */ @Test(expected = DiskErrorException.class) public void testCheckDiskErrorForFile() throws Exception { File parent = File.createTempFile("DiskCheck", "test"); parent.delete(); parent.mkdir(); File child = File.createTempFile("DiskCheck", "test", parent); diskChecker.checkDir(child); } /** * Check disk error for valid dir. 
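* A directory that exists and is writable should pass checkDir without throwing. 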
*/ @Test(timeout=60000) public void testCheckDiskErrorForDir() throws Exception { File parent = File.createTempFile("DiskCheck", "test"); parent.delete(); parent.mkdir(); File child = File.createTempFile("DiskCheck", "test", parent); child.delete(); child.mkdir(); diskChecker.checkDir(child); } } bookkeeper-release-4.2.4/bookkeeper-server/src/test/java/org/apache/bookkeeper/util/TestUtils.java000066400000000000000000000034101244507361200333260ustar00rootroot00000000000000/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.bookkeeper.util; import java.io.File; import java.util.HashSet; import java.util.Set; import org.apache.bookkeeper.bookie.Bookie; public class TestUtils { public static boolean hasLogFiles(File ledgerDirectory, boolean partial, Integer... logsId) { boolean result = partial ? false : true; Set logs = new HashSet(); for (File file : Bookie.getCurrentDirectory(ledgerDirectory).listFiles()) { if (file.isFile()) { String name = file.getName(); if (!name.endsWith(".log")) { continue; } logs.add(Integer.parseInt(name.split("\\.")[0], 16)); } } for (Integer logId : logsId) { boolean exist = logs.contains(logId); if ((partial && exist) || (!partial && !exist)) { return !result; } } return result; } } bookkeeper-release-4.2.4/bookkeeper-server/src/test/resources/000077500000000000000000000000001244507361200245035ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/resources/log4j.properties000066400000000000000000000052301244507361200276400ustar00rootroot00000000000000# # # Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. 
bookkeeper-release-4.2.4/bookkeeper-server/src/test/resources/000077500000000000000000000000001244507361200245035ustar00rootroot00000000000000bookkeeper-release-4.2.4/bookkeeper-server/src/test/resources/log4j.properties000066400000000000000000000052301244507361200276400ustar00rootroot00000000000000
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#

#
# Bookkeeper Logging Configuration
#
# Format is "<default threshold> (, <appender>)+

# DEFAULT: console appender only
log4j.rootLogger=INFO, CONSOLE

# Example with rolling log file
#log4j.rootLogger=DEBUG, CONSOLE, ROLLINGFILE

# Example with rolling log file and tracing
#log4j.rootLogger=TRACE, CONSOLE, ROLLINGFILE, TRACEFILE

log4j.logger.org.apache.zookeeper=ERROR

#
# Log INFO level and above messages to the console
#
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.Threshold=INFO
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} - %-5p - [%t:%C{1}@%L] - %m%n

#
# Add ROLLINGFILE to rootLogger to get log file output
#   Log DEBUG level and above messages to a log file
log4j.appender.ROLLINGFILE=org.apache.log4j.DailyRollingFileAppender
log4j.appender.ROLLINGFILE.Threshold=DEBUG
log4j.appender.ROLLINGFILE.File=bookkeeper-server.log
log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p - [%t:%C{1}@%L] - %m%n

# Max log file size of 10MB
log4j.appender.ROLLINGFILE.MaxFileSize=10MB
# uncomment the next line to limit number of backup files
#log4j.appender.ROLLINGFILE.MaxBackupIndex=10

log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n

#
# Add TRACEFILE to rootLogger to get log file output
#   Log DEBUG level and above messages to a log file
log4j.appender.TRACEFILE=org.apache.log4j.FileAppender
log4j.appender.TRACEFILE.Threshold=TRACE
log4j.appender.TRACEFILE.File=bookkeeper_trace.log

log4j.appender.TRACEFILE.layout=org.apache.log4j.PatternLayout
### Notice we are including log4j's NDC here (%x)
log4j.appender.TRACEFILE.layout.ConversionPattern=%d{ISO8601} - %-5p [%t:%C{1}@%L][%x] - %m%n
bookkeeper-release-4.2.4/compat-deps/000077500000000000000000000000001244507361200175055ustar00rootroot00000000000000bookkeeper-release-4.2.4/compat-deps/bookkeeper-server-compat-4.0.0/000077500000000000000000000000001244507361200250555ustar00rootroot00000000000000bookkeeper-release-4.2.4/compat-deps/bookkeeper-server-compat-4.0.0/pom.xml000066400000000000000000000061611244507361200263760ustar00rootroot00000000000000 4.0.0 compat-deps org.apache.bookkeeper 4.2.4 org.apache.bookkeeper bookkeeper-server-compat400 4.0.0 bookkeeper-server-compat400 http://maven.apache.org UTF-8 org.apache.bookkeeper bookkeeper-server 4.0.0 org.apache.maven.plugins maven-shade-plugin 1.5 package shade false org.apache.*:* org.jboss.*:* commons-*:* commons-beanutils*:commons-beanutils* org.apache org.apache.bk_v4_0_0 org.apache.log4j org.jboss org.jboss.bk_v4_0_0
bookkeeper-release-4.2.4/compat-deps/bookkeeper-server-compat-4.1.0/000077500000000000000000000000001244507361200250565ustar00rootroot00000000000000bookkeeper-release-4.2.4/compat-deps/bookkeeper-server-compat-4.1.0/pom.xml000066400000000000000000000071261244507361200264010ustar00rootroot00000000000000 4.0.0 compat-deps org.apache.bookkeeper 4.2.4 org.apache.bookkeeper bookkeeper-server-compat410 4.1.0 bookkeeper-server-compat410 http://maven.apache.org UTF-8 org.apache.bookkeeper bookkeeper-server 4.1.0 org.apache.maven.plugins maven-shade-plugin 1.5 package shade false org.apache.*:* org.jboss.*:* commons-*:* commons-beanutils*:commons-beanutils* org.apache.commons org.apache.bk_v4_1_0.commons org.apache.bookkeeper org.apache.bk_v4_1_0.bookkeeper org.apache.zookeeper org.apache.bk_v4_1_0.bookkeeper org.apache.jute
org.apache.bk_v4_1_0.jute org.jboss org.jboss.bk_v4_1_0
bookkeeper-release-4.2.4/compat-deps/hedwig-server-compat-4.0.0/000077500000000000000000000000001244507361200241765ustar00rootroot00000000000000bookkeeper-release-4.2.4/compat-deps/hedwig-server-compat-4.0.0/pom.xml000066400000000000000000000074121244507361200255170ustar00rootroot00000000000000 4.0.0 compat-deps org.apache.bookkeeper 4.2.4 org.apache.bookkeeper hedwig-server-compat400 4.0.0 hedwig-server-compat400 http://maven.apache.org UTF-8 org.apache.bookkeeper hedwig-server 4.0.0 org.apache.maven.plugins maven-shade-plugin 1.5 package shade false org.apache.*:* org.jboss.*:* commons-*:* commons-beanutils*:commons-beanutils* org.apache.commons org.apache.hw_v4_0_0.commons org.apache.bookkeeper org.apache.hw_v4_0_0.bookkeeper org.apache.zookeeper org.apache.hw_v4_0_0.zookkeeper org.apache.hedwig org.apache.hw_v4_0_0.hedwig org.apache.jute org.apache.hw_v4_0_0.jute org.jboss org.jboss.hw_v4_0_0
bookkeeper-release-4.2.4/compat-deps/hedwig-server-compat-4.1.0/000077500000000000000000000000001244507361200241775ustar00rootroot00000000000000bookkeeper-release-4.2.4/compat-deps/hedwig-server-compat-4.1.0/pom.xml000066400000000000000000000074121244507361200255200ustar00rootroot00000000000000 4.0.0 compat-deps org.apache.bookkeeper 4.2.4 org.apache.bookkeeper hedwig-server-compat410 4.1.0 hedwig-server-compat410 http://maven.apache.org UTF-8 org.apache.bookkeeper hedwig-server 4.1.0 org.apache.maven.plugins maven-shade-plugin 1.5 package shade false org.apache.*:* org.jboss.*:* commons-*:* commons-beanutils*:commons-beanutils* org.apache.commons org.apache.hw_v4_1_0.commons org.apache.bookkeeper org.apache.hw_v4_1_0.bookkeeper org.apache.zookeeper org.apache.hw_v4_1_0.zookkeeper org.apache.hedwig org.apache.hw_v4_1_0.hedwig org.apache.jute org.apache.hw_v4_1_0.jute org.jboss org.jboss.hw_v4_1_0
bookkeeper-release-4.2.4/compat-deps/pom.xml000066400000000000000000000034111244507361200210210ustar00rootroot00000000000000 bookkeeper org.apache.bookkeeper 4.2.4 4.0.0 org.apache.bookkeeper 4.2.4 compat-deps pom compatibility dependencies bookkeeper-server-compat-4.0.0 bookkeeper-server-compat-4.1.0 hedwig-server-compat-4.0.0 hedwig-server-compat-4.1.0 UTF-8 UTF-8
bookkeeper-release-4.2.4/doc/000077500000000000000000000000001244507361200160365ustar00rootroot00000000000000bookkeeper-release-4.2.4/doc/bookieConfigParams.textile000066400000000000000000000167001244507361200232040ustar00rootroot00000000000000
Title: Bookie Configuration Parameters
Notice: Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at "http://www.apache.org/licenses/LICENSE-2.0":http://www.apache.org/licenses/LICENSE-2.0. . Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. .

h1. Bookie Configuration Parameters

This page contains detailed information about configuration parameters used for configuring a bookie server. There is an example in "bookkeeper-server/conf/bk_server.conf".

h2. Server parameters

| @bookiePort@ | Port that the bookie server listens on. The default value is 3181. |
| @journalDirectory@ | Directory to which BookKeeper outputs its write-ahead log, ideally on a dedicated device. The default value is "/tmp/bk-txn". |
| @ledgerDirectories@ | Directory to which BookKeeper outputs ledger snapshots. Multiple directories can be defined, separated by commas, e.g. /tmp/bk1-data,/tmp/bk2-data. Ideally ledger dirs and journal dir are each on a different device, which reduces the contention between random I/O and sequential writes. It is possible to run with a single disk, but performance will be significantly lower. |
| @logSizeLimit@ | Maximum file size of the entry logger, in bytes. A new entry log file will be created when the old one reaches the file size limitation. The default value is 2GB. |
| @journalMaxSizeMB@ | Maximum file size of a journal file, in megabytes. A new journal file will be created when the old one reaches the file size limitation. The default value is 2048 (2GB). |
| @journalMaxBackups@ | Max number of old journal files to keep. Keeping a number of old journal files might help data recovery in some special cases. The default value is 5. |
| @gcWaitTime@ | Interval to trigger the next garbage collection, in milliseconds. Since garbage collection runs in the background, running it too frequently hurts performance. It is best to set this value high enough if there is sufficient disk capacity. |
| @flushInterval@ | Interval to flush ledger index pages to disk, in milliseconds. Flushing index files will introduce random disk I/O. Consequently, it is important to have the journal dir and ledger dirs each on different devices. However, if it is necessary to have the journal dir and ledger dirs on the same device, one option is to increase the flush interval to get higher performance. Upon a failure, the bookie will then take longer to recover. |
| @bookieDeathWatchInterval@ | Interval to check whether a bookie is dead or not, in milliseconds. |

h2. NIO server settings

| @serverTcpNoDelay@ | This setting is used to enable/disable Nagle's algorithm, which is a means of improving the efficiency of TCP/IP networks by reducing the number of packets that need to be sent over the network. If you are sending many small messages, such that more than one can fit in a single IP packet, setting @serverTcpNoDelay@ to false to enable Nagle's algorithm can provide better performance. The default value is true. |

h2. Ledger cache settings

| @openFileLimit@ | Maximum number of ledger index files that can be opened in a bookie. If the number of ledger index files reaches this limit, the bookie starts to flush some ledger indexes from memory to disk. If flushing happens too frequently, performance is affected. You can tune this number to improve performance accordingly. |
| @pageSize@ | Size of an index page in the ledger cache, in bytes. A larger index page can improve performance when writing pages to disk, which is efficient when you have a small number of ledgers and these ledgers have a similar number of entries. With a large number of ledgers and a few entries per ledger, a smaller index page improves memory usage. |
| @pageLimit@ | Maximum number of index pages to store in the ledger cache. If the number of index pages reaches this limit, the bookie server starts to flush ledger indexes from memory to disk. Increasing this value is an option when flushing becomes frequent. It is important to make sure, though, that pageLimit*pageSize is not more than the JVM max memory limit; otherwise it will raise an OutOfMemoryException. In general, increasing pageLimit and using a smaller index page gives better performance in the case of a large number of ledgers with few entries per ledger. If pageLimit is -1, a bookie uses 1/3 of the JVM memory to compute the maximum number of index pages. |

h2. Ledger manager settings

| @ledgerManagerType@ | What kind of ledger manager is used to manage how ledgers are stored, managed and garbage collected. See "BookKeeper Internals":./bookkeeperInternals.html for detailed info. Default is flat. |
| @zkLedgersRootPath@ | Root zookeeper path to store ledger metadata. Default is /ledgers. |

h2. Entry Log compaction settings

| @minorCompactionInterval@ | Interval to run minor compaction, in seconds. If it is set to less than or equal to zero, then minor compaction is disabled. Default is 1 hour. |
| @minorCompactionThreshold@ | Entry log files with remaining size under this threshold value will be compacted in a minor compaction. If it is set to less than or equal to zero, the minor compaction is disabled. Default is 0.2. |
| @majorCompactionInterval@ | Interval to run major compaction, in seconds. If it is set to less than or equal to zero, then major compaction is disabled. Default is 1 day. |
| @majorCompactionThreshold@ | Entry log files with remaining size below this threshold value will be compacted in a major compaction. Those entry log files whose remaining size percentage is still higher than the threshold value will never be compacted. If it is set to less than or equal to zero, the major compaction is disabled. Default is 0.8. |

h2. Statistics

| @enableStatistics@ | Enables the collection of statistics. Default is on. |

h2. Auto-replication

| @openLedgerRereplicationGracePeriod@ | This is the grace period which the rereplication worker waits before fencing and replicating a ledger fragment which is still being written to, upon a bookie failure. The default is 30s. |

h2. Read-only mode support

| @readOnlyModeEnabled@ | Enables/disables the read-only bookie feature. A bookie goes into read-only mode when it finds integrity issues with stored data. If @readOnlyModeEnabled@ is false, the bookie shuts down if it finds integrity issues. By default it is enabled. |

h2. Disk utilization

| @diskUsageThreshold@ | Fraction of the total utilized usable disk space to declare the disk full. The total available disk space is obtained with File.getUsableSpace(). Default is 0.95. |
| @diskCheckInterval@ | Interval between consecutive checks of disk utilization. Default is 10s. |

h2. ZooKeeper parameters

| @zkServers@ | A list of one or more servers on which zookeeper is running. The server list is comma separated, e.g., zk1:2181,zk2:2181,zk3:2181 |
| @zkTimeout@ | ZooKeeper client session timeout in milliseconds. The bookie server will exit if it receives SESSION_EXPIRED because it was partitioned off from ZooKeeper for more than the session timeout. JVM garbage collection or disk I/O can cause SESSION_EXPIRED. Increasing this value can help avoid this issue. The default value is 10,000. |
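p. As a point of reference, here is a minimal, illustrative @bk_server.conf@ fragment combining some of the parameters above (the paths and values are examples only, not tuning recommendations):

bc. bookiePort=3181
journalDirectory=/mnt/journal/bk-txn
ledgerDirectories=/mnt/data1/bk-data,/mnt/data2/bk-data
gcWaitTime=600000
flushInterval=100
zkServers=zk1:2181,zk2:2181,zk3:2181
zkTimeout=10000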
bookkeeper-release-4.2.4/doc/bookieRecovery.textile000066400000000000000000000152761244507361200224370ustar00rootroot00000000000000
Title: BookKeeper Bookie Recovery
Notice: Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at . http://www.apache.org/licenses/LICENSE-2.0 . Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

h1. Bookie Ledger Recovery

p. When a Bookie crashes, any ledgers with data on that Bookie become underreplicated. There are two options for bringing the ledgers back to full replication: Autorecovery and Manual Bookie Recovery.

h2. Autorecovery

p. Autorecovery runs as a daemon alongside the Bookie daemon on each Bookie. Autorecovery detects when a bookie in the cluster has become unavailable, and rereplicates all the ledgers which were on that bookie, so that those ledgers are brought back to full replication. See the "Admin Guide":./bookkeeperConfig.html for instructions on how to start autorecovery.

h2. Manual Bookie Recovery

p. If autorecovery is not enabled, it is possible for the administrator to manually rereplicate the data from the failed bookie. To run recovery, with zk1.example.com as the zookeeper ensemble and 192.168.1.10 as the failed bookie, do the following:

@bookkeeper-server/bin/bookkeeper org.apache.bookkeeper.tools.BookKeeperTools zk1.example.com:2181 192.168.1.10:3181@

It is necessary to specify the host and port portion of the failed bookie, as this is how it identifies itself to zookeeper. It is possible to specify a third argument, which is the bookie to replicate to. If this is omitted, as in our example, a random bookie is chosen for each ledger segment. A ledger segment is a continuous sequence of entries in a bookie, which share the same ensemble.
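p. The same operation is available programmatically through @org.apache.bookkeeper.client.BookKeeperAdmin@. The following is an illustrative sketch only (host names are examples, error handling is omitted, and the null-destination behaviour — picking a random target bookie per segment, as the tool does — is assumed):

bc. import java.net.InetSocketAddress;
import org.apache.bookkeeper.client.BookKeeperAdmin;
// ...
BookKeeperAdmin admin = new BookKeeperAdmin("zk1.example.com:2181");
InetSocketAddress failedBookie = new InetSocketAddress("192.168.1.10", 3181);
admin.recoverBookieData(failedBookie, null); // null: pick a target bookie at random
admin.close();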
h2. AutoRecovery Internals

Auto-Recovery has two components:
* *Auditor*, a singleton node which watches for bookie failure, and creates rereplication tasks for the ledgers on failed bookies.
* *ReplicationWorker*, which runs on each Bookie, takes rereplication tasks and executes them.

Both components run as threads in the *AutoRecoveryMain* process. The *AutoRecoveryMain* process runs on each Bookie in the cluster. All recovery nodes participate in a leader election to decide which node becomes the auditor. Those which fail to become the auditor watch the elected auditor, and run the election again if they see that it has failed.

h3. Auditor

The auditor watches the list of bookies registered with ZooKeeper in the cluster. A Bookie registers with ZooKeeper during startup. If the bookie crashes or is killed, the bookie's registration disappears. The auditor is notified of changes in the registered bookies list. When the auditor sees that a bookie has disappeared from the list, it immediately scans the complete ledger list to find ledgers which have stored data on the failed bookie. Once it has a list of ledgers which need to be rereplicated, it publishes a rereplication task for each ledger under the /underreplicated/ znode in ZooKeeper.

h3. ReplicationWorker

Each replication worker watches for tasks being published under the /underreplicated/ znode. When a new task appears, it tries to get a lock on it. If it cannot acquire the lock, it tries the next entry. The locks are implemented using ZooKeeper ephemeral znodes.

The replication worker scans through the rereplication task's ledger for segments of which its local bookie is not a member. When it finds segments matching these criteria, it replicates the entries of those segments to the local bookie. If, after this process, the ledger is fully replicated, the ledger's entry under /underreplicated/ is deleted, and the lock is released. If there is a problem replicating, or there are still segments in the ledger which are underreplicated (due to the local bookie already being part of the ensemble for the segment), then the lock is simply released.

If the replication worker finds a segment which needs rereplication, but does not have a defined endpoint (i.e. the final segment of a ledger currently being written to), it will wait for a grace period before attempting rereplication. If the segment needing rereplication still does not have a defined endpoint after the grace period, the ledger is fenced and rereplication then takes place.

This avoids the case where a client is writing to a ledger, and one of the bookies goes down, but the client has not written an entry to that bookie before rereplication takes place. The client could continue writing to the old segment, even though the ensemble for the segment had changed. This could lead to data loss. Fencing prevents this scenario from happening. In the normal case, the client will try to write to the failed bookie within the grace period, and will have started a new segment before rereplication starts. See the "Admin Guide":./bookkeeperConfig.html for how to configure this grace period.

h2. The Rereplication process

The ledger rereplication process is as follows.
# The client goes through all ledger segments in the ledger, selecting those which contain the failed bookie;
# A recovery process is initiated for each ledger segment in this list;
## The client selects a bookie to which all entries in the ledger segment will be replicated; in the case of autorecovery, this will always be the local bookie;
## The client reads entries that belong to the ledger segment from other bookies in the ensemble and writes them to the selected bookie;
## Once all entries have been replicated, the zookeeper metadata for the segment is updated to reflect the new ensemble;
## The segment is marked as fully replicated in the recovery tool;
# Once all ledger segments are marked as fully replicated, the ledger is marked as fully replicated.

h2. The Manual Bookie Recovery process

The manual bookie recovery process is as follows.
# The client reads the metadata of active ledgers from zookeeper;
# From this, the ledgers which contain segments using the failed bookie in their ensemble are selected;
# A recovery process is initiated for each ledger in this list;
## The ledger rereplication process is run for each ledger;
# Once all ledgers are marked as fully replicated, bookie recovery is finished.

bookkeeper-release-4.2.4/doc/bookkeeperConfig.textile000066400000000000000000000270461244507361200227220ustar00rootroot00000000000000
Title: BookKeeper Administrator's Guide
Notice: Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at "http://www.apache.org/licenses/LICENSE-2.0":http://www.apache.org/licenses/LICENSE-2.0. . . Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. . .

h1. Abstract
This document contains information about deploying, administering and maintaining BookKeeper. It also discusses best practices and common problems.

h1. Running a BookKeeper instance

h2. System requirements

A typical BookKeeper installation comprises a set of bookies and a set of ZooKeeper replicas. The exact number of bookies depends on the quorum mode, desired throughput, and the number of clients using this installation simultaneously. The minimum number of bookies is three for self-verifying entries (which store a message authentication code along with each entry) and four for generic entries (which do not store a message authentication code with each entry), and there is no upper limit on the number of bookies. Increasing the number of bookies will, in fact, enable higher throughput.

For performance, we require each server to have at least two disks. It is possible to run a bookie with a single disk, but performance will be significantly lower in this case.

For ZooKeeper, there is no constraint with respect to the number of replicas. Having a single machine running ZooKeeper in standalone mode is sufficient for BookKeeper. For resilience purposes, it might be a good idea to run ZooKeeper in quorum mode with multiple servers. Please refer to the ZooKeeper documentation for details on how to configure ZooKeeper with multiple replicas.

h2. Starting and Stopping Bookies

To *start* a bookie, execute the following command:

* To run a bookie in the foreground: @bookkeeper-server/bin/bookkeeper bookie@
* To run a bookie in the background: @bookkeeper-server/bin/bookkeeper-daemon.sh start bookie@

The configuration parameters can be set in bookkeeper-server/conf/bk_server.conf. The important parameters are:

* @bookiePort@, the port number that the bookie listens on;
* @zkServers@, a comma separated list of ZooKeeper servers in hostname:port format;
* @journalDirectory@, the path for the Log Device (stores the bookie write-ahead log);
* @ledgerDirectories@, the paths for the Ledger Device (store ledger entries);

Ideally, @journalDirectory@ and @ledgerDirectories@ are each on a different device. See "Bookie Configuration Parameters":./bookieConfigParams.html for a full list of configuration parameters.

To *stop* a bookie running in the background, execute the following command:

@bookkeeper-server/bin/bookkeeper-daemon.sh stop bookie [-force]@

@-force@ is optional, and is used to stop the bookie forcefully if the bookie server has not stopped gracefully within _BOOKIE_STOP_TIMEOUT_ (an environment variable), which is 30 seconds by default.

h3. Upgrading

From time to time, we may make changes to the filesystem layout of the bookie which are incompatible with previous versions of bookkeeper and require that directories used with previous versions are upgraded. If you upgrade your bookkeeper software, and an upgrade is required, then the bookie will fail to start and print an error such as:

@2012-05-25 10:41:50,494 - ERROR - [main:Bookie@246] - Directory layout version is less than 3, upgrade needed@

BookKeeper provides a utility for upgrading the filesystem.

@bookkeeper-server/bin/bookkeeper upgrade@

The upgrade application takes 3 possible switches, @--upgrade@, @--rollback@ or @--finalize@. A normal upgrade process looks like this:

# @bookkeeper-server/bin/bookkeeper upgrade --upgrade@
# @bookkeeper-server/bin/bookkeeper bookie@
# Check everything is working. Kill the bookie, ^C.
# If everything is ok, @bookkeeper-server/bin/bookkeeper upgrade --finalize@
# Start the bookie again @bookkeeper-server/bin/bookkeeper bookie@
# If something is amiss, you can roll back the upgrade @bookkeeper-server/bin/bookkeeper upgrade --rollback@
h3. Formatting

To format the bookie metadata in Zookeeper, execute the following command once:

@bookkeeper-server/bin/bookkeeper shell metaformat [-nonInteractive] [-force]@

To format the bookie local filesystem data, execute the following command on each bookie node:

@bookkeeper-server/bin/bookkeeper shell bookieformat [-nonInteractive] [-force]@

The @-nonInteractive@ and @-force@ switches are optional. If @-nonInteractive@ is set, the user will not be asked to confirm the format operation if old data exists. If old data exists, the format operation will abort, unless the @-force@ switch has been specified, in which case it will proceed. By default, the user will be prompted to confirm the format operation if old data exists.

h3. Logging

BookKeeper uses "slf4j":http://www.slf4j.org for logging, with the log4j bindings enabled by default. To enable logging from a bookie, create a log4j.properties file and point the environment variable BOOKIE_LOG_CONF to the configuration file. The path to the log4j.properties file must be absolute.

@export BOOKIE_LOG_CONF=/tmp/log4j.properties@
@bookkeeper-server/bin/bookkeeper bookie@

h3. Missing disks or directories

Replacing disks or removing directories accidentally can cause a bookie to fail while trying to read a ledger fragment which the ledger metadata claims exists on the bookie. For this reason, when a bookie is started for the first time, its disk configuration is fixed for the lifetime of that bookie. Any change to the disk configuration of the bookie, such as a crashed disk or an accidental configuration change, will result in the bookie being unable to start, with the following error:

@2012-05-29 18:19:13,790 - ERROR - [main:BookieServer@314] - Exception running bookie server : @
@org.apache.bookkeeper.bookie.BookieException$InvalidCookieException@
@.......at org.apache.bookkeeper.bookie.Cookie.verify(Cookie.java:82)@
@.......at org.apache.bookkeeper.bookie.Bookie.checkEnvironment(Bookie.java:275)@
@.......at org.apache.bookkeeper.bookie.Bookie.<init>(Bookie.java:351)@

If the change was the result of an accidental configuration change, the change can be reverted and the bookie can be restarted. However, if the change cannot be reverted, such as is the case when you want to add a new disk or replace a disk, the bookie must be wiped and then all its data re-replicated onto it. To do this, do the following:

# Increment the _bookiePort_ in _bk_server.conf_.
# Ensure that all directories specified by _journalDirectory_ and _ledgerDirectories_ are empty.
# Start the bookie.
# Run @bin/bookkeeper org.apache.bookkeeper.tools.BookKeeperTools <zkserver> <oldbookie> <newbookie>@ to re-replicate data. <oldbookie> and <newbookie> are identified by their external IP and bookiePort. For example, if this process is being run on a bookie with an external IP of 192.168.1.10, with an old _bookiePort_ of 3181 and a new _bookiePort_ of 3182, and with zookeeper running on _zk1.example.com_, the command to run would be @bin/bookkeeper org.apache.bookkeeper.tools.BookKeeperTools zk1.example.com 192.168.1.10:3181 192.168.1.10:3182@. See "Bookie Recovery":./bookieRecovery.html for more details on the re-replication process.

The mechanism that prevents the bookie from starting up in the case of configuration changes exists to prevent the following silent failures:
# A strict subset of the ledger devices (among multiple ledger devices) has been replaced, consequently making the content of the replaced devices unavailable;
# A strict subset of the ledger directories has been accidentally deleted.

h3. Full or failing disks

A bookie can go into read-only mode if it detects problems with its disks. In read-only mode, the bookie will serve read requests, but will not allow any writes. Any ledger currently writing to the bookie will replace the bookie in its ensemble. No new ledgers will select the read-only bookie for writing.

The bookie goes into read-only mode under the following conditions.
# All disks are full.
# An error occurred flushing to the ledger disks.
# An error occurred writing to the journal disk.

Important parameters are:
* @readOnlyModeEnabled@, whether read-only mode is enabled. If read-only mode is not enabled, the bookie will shut down on encountering any of the above conditions. By default, read-only mode is disabled.
* @diskUsageThreshold@, percentage threshold at which a disk will be considered full. This value must be between 0 and 1.0. By default, the value is 0.95.
* @diskCheckInterval@, interval at which the disks are checked to see if they are full. Specified in milliseconds. By default the check occurs every 10000 milliseconds (10 seconds).

h2. Running Autorecovery nodes

To run autorecovery nodes, we execute the following command on every Bookie node:

@bookkeeper-server/bin/bookkeeper autorecovery@

Configuration parameters for autorecovery can be set in *bookkeeper-server/conf/bk_server.conf*. Important parameters are (an illustrative fragment follows this list):

* @auditorPeriodicCheckInterval@, the interval at which the auditor will do a check of all ledgers in the cluster. By default this runs once a week. The interval is set in seconds. To disable the periodic check completely, set this to 0. Note that periodic checking will put extra load on the cluster, so it should not be run more frequently than once a day.
* @rereplicationEntryBatchSize@ specifies the number of entries which a replication will rereplicate in parallel. The default value is 10. A larger value for this parameter will increase the speed at which autorecovery occurs but will increase the memory requirement of the autorecovery process, and create more load on the cluster.
* @openLedgerRereplicationGracePeriod@, the amount of time, in milliseconds, which a recovery worker will wait before recovering a ledger segment which has no defined end, i.e. the client is still writing to that segment. If the client is still active, it should detect the bookie failure, and start writing to a new ledger segment, and a new ensemble, which doesn't include the failed bookie. Creating a new ledger segment defines the end of the previous segment. If, after the grace period, the ledger segment's end has not been defined, we assume the writing client has crashed. The ledger is fenced and the client is blocked from writing any more entries to the ledger. The default value is 30000ms.
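p. For example, an illustrative @bk_server.conf@ fragment for autorecovery might look like this (example values only, not recommendations):

bc. auditorPeriodicCheckInterval=86400
rereplicationEntryBatchSize=10
openLedgerRereplicationGracePeriod=30000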
h3. Disabling Autorecovery during maintenance

It is useful to disable autorecovery during maintenance, for example, to avoid a Bookie's data being unnecessarily rereplicated when it is only being taken down for a short period to update the software or change the configuration.

To disable autorecovery, run:

@bookkeeper-server/bin/bookkeeper shell autorecovery -disable@

To reenable, run:

@bookkeeper-server/bin/bookkeeper shell autorecovery -enable@

Autorecovery enable/disable only needs to be run once for the whole cluster, and not individually on each Bookie in the cluster.

h2. Setting up a test ensemble

Sometimes it is useful to run an ensemble of bookies on your local machine for testing. We provide a utility for doing this. It will set up N bookies, and a zookeeper instance, locally. The data on these bookies and in the zookeeper instance is not persisted over restarts, so obviously this should never be used in a production environment. To run a test ensemble of 10 bookies, do the following:

@bookkeeper-server/bin/bookkeeper localbookie 10@

bookkeeper-release-4.2.4/doc/bookkeeperConfigParams.textile000066400000000000000000000056641244507361200240670ustar00rootroot00000000000000
Title: BookKeeper Configuration Parameters
Notice: Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at "http://www.apache.org/licenses/LICENSE-2.0":http://www.apache.org/licenses/LICENSE-2.0. . Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. .

h1. BookKeeper Configuration Parameters

This page contains detailed information about configuration parameters used for configuring a BookKeeper client.

h3. General parameters

| @zkServers@ | A list of one or more servers on which zookeeper is running. The server list can be comma separated values, e.g., zk1:2181,zk2:2181,zk3:2181 |
| @zkTimeout@ | ZooKeeper client session timeout in milliseconds. The default value is 10,000. |
| @throttle@ | A throttle value is used to avoid producing more requests than the bookie servers can handle, which could exhaust memory. The default is 5,000. |
| @readTimeout@ | This is the number of seconds the bookkeeper client waits without hearing a response from a bookie before the client considers it failed. The default is 5 seconds. |
| @numWorkerThreads@ | This is the number of worker threads used by the bookkeeper client to submit operations. The default value is the number of available processors. |

h3. NIO server settings

| @clientTcpNoDelay@ | This setting is used to enable/disable Nagle's algorithm, which is a means of improving the efficiency of TCP/IP networks by reducing the number of packets that need to be sent over the network. If you are sending many small messages, such that more than one can fit in a single IP packet, setting @clientTcpNoDelay@ to false to enable Nagle's algorithm can provide better performance. Default value is true. |

h3. Ledger manager settings

| @ledgerManagerType@ | This parameter determines the type of ledger manager used to manage how ledgers are stored, manipulated, and garbage collected. See "BookKeeper Internals":./bookkeeperInternals.html for detailed info. Default value is flat. |
| @zkLedgersRootPath@ | Root zookeeper path to store ledger metadata. Default is /ledgers. |

h3. Bookie recovery settings

The bookie recovery tool currently needs a digest type and password to open ledgers for recovery. BookKeeper currently assumes that all ledgers were created with the same DigestType and password. In the future, it will need to know, for each ledger, which DigestType and password were used to create it before opening it.

| @digestType@ | Digest type used to open ledgers from the bookie recovery tool. |
| @passwd@ | Password used to open ledgers from the bookie recovery tool. |
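p. Client-side parameters such as @zkServers@, @zkTimeout@ and @readTimeout@ can also be set programmatically before constructing the client. The following is an illustrative sketch only (values are examples; exception handling omitted), assuming the @ClientConfiguration@ setters shown:

bc. import org.apache.bookkeeper.client.BookKeeper;
import org.apache.bookkeeper.conf.ClientConfiguration;
// ...
ClientConfiguration conf = new ClientConfiguration();
conf.setZkServers("zk1:2181,zk2:2181,zk3:2181"); // zkServers
conf.setZkTimeout(10000);                        // zkTimeout, in milliseconds
conf.setReadTimeout(5);                          // readTimeout, in seconds
BookKeeper bk = new BookKeeper(conf);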
bookkeeper-release-4.2.4/doc/bookkeeperInternals.textile000066400000000000000000000131201244507361200234410ustar00rootroot00000000000000
Title: BookKeeper Internals
Notice: Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at "http://www.apache.org/licenses/LICENSE-2.0":http://www.apache.org/licenses/LICENSE-2.0. . Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. .

h2. Bookie Internals

p. The bookie server stores its data in multiple ledger directories and its journal files in a journal directory. Ideally, storing journal files in a separate directory from data files increases throughput and decreases latency.

h3. The Bookie Journal

p. The journal directory has one kind of file in it:

* @{timestamp}.txn@ - holds transactions executed in the bookie server.

p. Before persisting ledger index and data to disk, a bookie ensures that the transaction that represents the update is written to a journal in non-volatile storage. A new journal file is created, using the current timestamp, when a bookie starts or when an old journal file reaches its maximum size.

p. A bookie supports journal rolling to remove old journal files. In order to remove old journal files safely, the bookie server records a LastLogMark on the Ledger Device, which indicates that all updates (including index and data) before LastLogMark have been persisted to the Ledger Device.

p. LastLogMark contains two parts:
* @LastLogId@ - indicates in which journal file the transaction was persisted.
* @LastLogPos@ - indicates the position in the LastLogId journal file at which the transaction was persisted.

p. You may use the following settings to further fine tune the behavior of journalling on bookies:

| @journalMaxSizeMB@ | Journal file size limitation. When a journal file reaches this limitation, it will be closed and a new journal file created. |
| @journalMaxBackups@ | How many old journal files to keep whose id is less than the LastLogMark's journal id. |

bq. NOTE: keeping a number of old journal files can be useful for manual recovery in special cases.

h1. ZooKeeper Metadata

p. For BookKeeper, we require a ZooKeeper installation to store metadata, and we pass the list of ZooKeeper servers as a parameter to the constructor of the BookKeeper class (@org.apache.bookkeeper.client.BookKeeper@). To set up ZooKeeper, please check the "ZooKeeper documentation":http://zookeeper.apache.org/doc/trunk/index.html.

p. BookKeeper provides two mechanisms to organize its metadata in ZooKeeper. By default, the @FlatLedgerManager@ is used, and 99% of users should never need to look at anything else. However, in cases where there are a lot of ledgers active concurrently (> 50,000), the @HierarchicalLedgerManager@ should be used. For so many ledgers, a hierarchical approach is needed due to a limit ZooKeeper places on packet sizes ("JIRA Issue":https://issues.apache.org/jira/browse/BOOKKEEPER-39).

| @FlatLedgerManager@ | All ledger metadata are placed as children in a single zookeeper path. |
| @HierarchicalLedgerManager@ | All ledger metadata are partitioned into 2-level znodes. |

h2. Flat Ledger Manager

p. All ledgers' metadata are put in a single zookeeper path, created using a zookeeper sequential node, which ensures uniqueness of the ledger id. Each ledger node is prefixed with 'L'.

p. A bookie server manages its own active ledgers in a hash map, so it is easy for the bookie server to find which ledgers have been deleted from zookeeper and garbage collect them. Its garbage collection flow is described below:
* Fetch all existing ledgers from zookeeper (@zkActiveLedgers@).
* Fetch all ledgers currently active within the Bookie (@bkActiveLedgers@).
* Loop over @bkActiveLedgers@ to find those ledgers which do not exist in @zkActiveLedgers@ and garbage collect them.

h2. Hierarchical Ledger Manager

p. @HierarchicalLedgerManager@ first obtains a globally unique id from ZooKeeper using an EPHEMERAL_SEQUENTIAL znode.

p. Since the ZooKeeper sequential counter has a format of %10d -- that is, 10 digits with 0 (zero) padding, i.e. "<path>0000000001" -- @HierarchicalLedgerManager@ splits the generated id into 3 parts: @{level1 (2 digits)}{level2 (4 digits)}{level3 (4 digits)}@

p. These 3 parts are used to form the actual ledger node path used to store ledger metadata:

@{ledgers_root_path}/{level1}/{level2}/L{level3}@

p. E.g. ledger 0000000001 is split into the 3 parts 00, 0000, and 0001, and is stored in znode /{ledgers_root_path}/00/0000/L0001. So each znode could have at most 10000 ledgers, which avoids the problem of the child list being larger than the maximum ZooKeeper packet size.
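p. The resulting id-to-path mapping can be sketched as follows (an illustrative helper only, not the actual @HierarchicalLedgerManager@ code):

bc. // split a 10-digit, zero-padded ledger id into 2/4/4-digit levels
static String ledgerZnodePath(String ledgersRootPath, long ledgerId) {
    String id = String.format("%010d", ledgerId);  // e.g. "0000000001"
    return ledgersRootPath
        + "/" + id.substring(0, 2)                 // level1: "00"
        + "/" + id.substring(2, 6)                 // level2: "0000"
        + "/L" + id.substring(6);                  // level3: "L0001"
}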
p. A bookie server manages its active ledgers in a sorted map, which simplifies access to active ledgers in a particular (level1, level2) partition.

p. Garbage collection in the bookie server is processed node by node as follows:
* Fetch all level1 nodes, by calling zk#getChildren(ledgerRootPath).
** For each level1 node, fetch its level2 nodes:
** For each partition (level1, level2):
*** Fetch all existing ledgers from zookeeper belonging to partition (level1, level2) (@zkActiveLedgers@).
*** Fetch all ledgers currently active in the bookie which belong to partition (level1, level2) (@bkActiveLedgers@).
*** Loop over @bkActiveLedgers@ to find those ledgers which do not exist in @zkActiveLedgers@, and garbage collect them.

bq. NOTE: the Hierarchical Ledger Manager is better suited to managing a large number of ledgers in BookKeeper.

bookkeeper-release-4.2.4/doc/bookkeeperJMX.textile000066400000000000000000000043071244507361200221470ustar00rootroot00000000000000
Title: BookKeeper JMX
Notice: Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at "http://www.apache.org/licenses/LICENSE-2.0":http://www.apache.org/licenses/LICENSE-2.0. . . Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. . .

h1. JMX

Apache BookKeeper has extensive support for JMX, which allows viewing and managing a BookKeeper cluster. This document assumes that you have basic knowledge of JMX. See the "Sun JMX Technology":http://java.sun.com/javase/technologies/core/mntr-mgmt/javamanagement/ page to get started with JMX.

See the "JMX Management Guide":http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html for details on setting up local and remote management of VM instances. By default the included __bookkeeper__ script supports only local management - review the linked document to enable support for remote management (beyond the scope of this document).

The __Bookie Server__ is a JMX manageable server, which registers the proper MBeans during initialization to support JMX monitoring and management of the instance.

h1. Bookie Server MBean Reference

This table details JMX for a bookie server.

| _.MBean | _.MBean Object Name | _.Description |
| BookieServer | BookieServer_<port> | Represents a bookie server. Note that the object name includes the bookie port that the server listens on. It is the root MBean for the bookie server, and includes statistics for the bookie server, e.g. number of packets sent/received, and statistics for add/read operations. |
| Bookie | Bookie | Provides bookie statistics. Currently it just returns the current journal queue length waiting to be committed. |
| LedgerCache | LedgerCache | Provides ledger cache statistics, e.g. number of pages cached in the page cache, and number of files opened for ledger index files. |

bookkeeper-release-4.2.4/doc/bookkeeperMetadata.textile000066400000000000000000000106611244507361200232310ustar00rootroot00000000000000
Title: BookKeeper Metadata Management
Notice: Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at "http://www.apache.org/licenses/LICENSE-2.0":http://www.apache.org/licenses/LICENSE-2.0. . . Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. . .

h1. Metadata Management

There are two kinds of metadata that need to be managed in BookKeeper: one is the __list of available bookies__, which is used to track server availability (ZooKeeper is designed naturally for this); the other is __ledger metadata__, which can be handled efficiently by different kinds of key/value storage with __CAS (Compare And Set)__ semantics. __Ledger metadata__ is handled by __LedgerManager__ and can be plugged with various storage mediums.

h2. Ledger Metadata Management

The operations on the metadata of a ledger are quite straightforward. They are:

* @createLedger@: create a new entry to store the given ledger metadata. A unique id should be generated as the ledger id for the new ledger.
* @removeLedgerMetadata@: remove the entry of a ledger from the metadata store. A __Version__ object is provided to do a conditional remove. If the given __Version__ object doesn't match the current __Version__ in the metadata store, a __MetadataVersionException__ should be thrown to indicate a version conflict. __NoSuchLedgerExistsException__ should be returned if the ledger metadata entry doesn't exist.
* @readLedgerMetadata@: read the metadata of a ledger from the metadata store. The new __version__ should be set on the returned __LedgerMetadata__ object. __NoSuchLedgerExistsException__ should be returned if the entry of the ledger metadata doesn't exist.
* @writeLedgerMetadata@: update the metadata of a ledger matching the given __Version__ (see the sketch after this list). The update should be rejected and a __MetadataVersionException__ returned when the given __Version__ doesn't match the current __Version__ in the metadata store. __NoSuchLedgerExistsException__ should be returned if the entry of the ledger metadata doesn't exist. The version of the __LedgerMetadata__ object should be set to the new __Version__ generated by applying this update.
* @asyncProcessLedgers@: loop through all existing ledgers in the metadata store and apply a __Processor__. The __Processor__ provided is executed for each ledger. If a failure happens during iteration, the iteration should be terminated and the __final callback__ triggered with a failure. Otherwise, the __final callback__ is triggered after all ledgers are processed. Neither ordering nor transactional guarantees need to be provided by implementations of this interface.
* @getLedgerRanges@: return a list of ranges for the ledgers in the metadata store. The ledger metadata itself does not need to be fetched; only the ledger ids are needed. No ordering is required, but there must be no overlap between ledger ranges, and each ledger range must contain all the ledgers in the metadata store between its endpoints (i.e. for a ledger range [x, y], all ledger ids greater than or equal to x and less than or equal to y should exist only in this range). __getLedgerRanges__ is used in the __ScanAndCompare__ gc algorithm.
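p. A typical conditional update against such a store follows a read-modify-write pattern. The sketch below is illustrative pseudocode against a hypothetical @store@ handle and helper names, not an actual BookKeeper interface:

bc. // the read returns metadata carrying the Version it was read at
LedgerMetadata md = store.readLedgerMetadata(ledgerId);
modify(md); // apply the local change, e.g. record a new ensemble
try {
    // the write succeeds only if the store still holds md's Version
    store.writeLedgerMetadata(ledgerId, md);
} catch (MetadataVersionException e) {
    // a concurrent writer won the race: re-read and retry, or give up
}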
h1. How to choose a metadata storage medium for BookKeeper.

From the interface, several requirements need to be met before choosing a metadata storage medium for BookKeeper:

* @Check and Set (CAS)@: The ability to do a strict update according to a specific condition, e.g. matching a specific version (ZooKeeper) or matching existing content (HBase).
* @Optimized for Writes@: The metadata access pattern for BookKeeper is an initial read followed by continuous updates.
* @Optimized for Scans@: Scans are required for the __ScanAndCompare__ gc algorithm.

__ZooKeeper__ is the default implementation for BookKeeper metadata management. __ZooKeeper__ holds data in memory, provides a filesystem-like namespace, and meets all the above requirements. __ZooKeeper__ covers most BookKeeper use cases. However, if your application needs to manage millions of ledgers, a more scalable solution is __HBase__, which also meets the above requirements but is more complicated to set up.

bookkeeper-release-4.2.4/doc/bookkeeperOverview.textile000066400000000000000000000427451244507361200233150ustar00rootroot00000000000000
Title: BookKeeper overview
Notice: Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at "http://www.apache.org/licenses/LICENSE-2.0":http://www.apache.org/licenses/LICENSE-2.0. . Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. .

h1. Abstract

This guide contains detailed information about using BookKeeper for logging. It discusses the basic operations BookKeeper supports, and how to create logs and perform basic read and write operations on these logs.

h1. BookKeeper introduction

p. BookKeeper is a replicated service to reliably log streams of records. In BookKeeper, servers are "bookies", log streams are "ledgers", and each unit of a log (aka record) is a "ledger entry". BookKeeper is designed to be reliable; bookies, the servers that store ledgers, can crash, corrupt data, or discard data, but as long as there are enough bookies behaving correctly the service as a whole behaves correctly.

p. The initial motivation for BookKeeper comes from the namenode of HDFS. Namenodes have to log operations in a reliable fashion so that recovery is possible in the case of crashes. We have found the applications for BookKeeper extend far beyond HDFS, however. Essentially, any application that requires append-only storage can replace its implementation with BookKeeper. BookKeeper has the advantage of writing efficiently, replicating for fault tolerance, and scaling throughput with the number of servers through striping.

p. At a high level, a bookkeeper client receives entries from a client application and stores them on sets of bookies, and there are a few advantages in having such a service:

* We can use hardware that is optimized for such a service. We currently believe that such a system has to be optimized only for disk I/O;
* We can have a pool of servers implementing such a log system, shared among a number of servers;
* We can have a higher degree of replication with such a pool, which makes sense if the hardware necessary for it is cheaper compared to the one the application uses.

h1. In slightly more detail...

p. BookKeeper implements highly available logs, and it has been designed with write-ahead logging in mind. Besides high availability due to the replicated nature of the service, it provides high throughput due to striping. As we write entries to a subset of bookies of an ensemble and rotate writes across available quorums, we are able to increase throughput with the number of servers for both reads and writes. Scalability is a property that is possible to achieve in this case due to the use of quorums. Other replication techniques, such as state-machine replication, do not enable such a property.

p. An application first creates a ledger before writing to bookies through a local BookKeeper client instance. Upon creating a ledger, a BookKeeper client writes metadata about the ledger to ZooKeeper. Each ledger currently has a single writer. This writer has to execute a close ledger operation before any other client can read from it. If the writer of a ledger does not close a ledger properly because, for example, it has crashed before having the opportunity of closing the ledger, then the next client that tries to open the ledger executes a procedure to recover it. As closing a ledger consists essentially of writing the last entry written to a ledger to ZooKeeper, the recovery procedure simply finds the last entry written correctly and writes it to ZooKeeper.

p. Note that currently this recovery procedure is executed automatically upon trying to open a ledger and no explicit action is necessary. Although two clients may try to recover a ledger concurrently, only one will succeed: the first one that is able to create the close znode for the ledger.

h1. Bookkeeper elements and concepts

p. BookKeeper uses four basic elements:
* _Ledger_ : A ledger is a sequence of entries, and each entry is a sequence of bytes. Entries are written sequentially to a ledger and at most once. Consequently, ledgers have append-only semantics;
* _BookKeeper client_ : A client runs along with a BookKeeper application, and it enables applications to execute operations on ledgers, such as creating a ledger and writing to it;
* _Bookie_ : A bookie is a BookKeeper storage server. Bookies store the content of ledgers. For any given ledger L, we call an _ensemble_ the group of bookies storing the content of L. For performance, we store on each bookie of an ensemble only a fragment of a ledger. That is, we stripe when writing entries to a ledger such that each entry is written to a sub-group of bookies of the ensemble.
* _Metadata storage service_ : BookKeeper requires a metadata storage service to store information related to ledgers and available bookies. We currently use ZooKeeper for such a task.

h1. Bookkeeper initial design

p. A set of bookies implements BookKeeper, and we use a quorum-based protocol to replicate data across the bookies. There are basically two operations to an existing ledger: read and append. Here is the complete API list (more detail "here":bookkeeperProgrammer.html):

* Create ledger: creates a new empty ledger;
* Open ledger: opens an existing ledger for reading;
* Add entry: adds a record to a ledger either synchronously or asynchronously;
* Read entries: reads a sequence of entries from a ledger either synchronously or asynchronously

p. There is only a single client that can write to a ledger. Once that ledger is closed or the client fails, no more entries can be added. (We take advantage of this behavior to provide our strong guarantees.) There will not be gaps in the ledger. Fingers get broken, people get roughed up or end up in prison when books are manipulated, so there is no deleting or changing of entries.

!images/bk-overview.jpg!

p. A simple use of BookKeeper is to implement a write-ahead transaction log. A server maintains an in-memory data structure (with periodic snapshots for example) and logs changes to that structure before it applies the change. The application server creates a ledger at startup and stores the ledger id and password in a well known place (ZooKeeper maybe). When it needs to make a change, the server adds an entry with the change information to a ledger and applies the change when BookKeeper adds the entry successfully. The server can even use asyncAddEntry to queue up many changes for high change throughput. BookKeeper meticulously logs the changes in order and calls the completion functions in order.

p. When the application server dies, a backup server will come online, get the last snapshot and then it will open the ledger of the old server and read all the entries from the time the snapshot was taken. (Since it doesn't know the last entry number it will use MAX_INTEGER). Once all the entries have been processed, it will close the ledger and start a new one for its use.
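p. In code, such a write-ahead-log writer needs only a few calls. The following is an illustrative sketch (the digest type, password and zookeeper address are examples; exception handling and the well-known-place bookkeeping are omitted):

bc. import org.apache.bookkeeper.client.BookKeeper;
import org.apache.bookkeeper.client.LedgerHandle;
// ...
BookKeeper bk = new BookKeeper("zk1.example.com:2181");
LedgerHandle lh = bk.createLedger(BookKeeper.DigestType.MAC, "myPasswd".getBytes());
long entryId = lh.addEntry("change record".getBytes()); // apply the change only after this returns
lh.close(); // records the last entry so other clients can read the ledger
bk.close();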
p. A client library takes care of communicating with bookies and managing entry numbers. An entry has the following fields:

|Field|Type|Description|
|Ledger number|long|The id of the ledger of this entry|
|Entry number|long|The id of this entry|
|last confirmed ( _LC_ )|long|id of the last recorded entry|
|data|byte[]|the entry data (supplied by application)|
|authentication code|byte[]|Message authentication code that includes all other fields of the entry|

p. The client library generates a ledger entry. None of the fields are modified by the bookies and only the first three fields are interpreted by the bookies.

p. To add to a ledger, the client generates the entry above using the ledger number. The entry number will be one more than the last entry generated. The _LC_ field contains the last entry that has been successfully recorded by BookKeeper. If the client writes entries one at a time, _LC_ is the last entry id. But, if the client is using asyncAddEntry, there may be many entries in flight. An entry is considered recorded when both of the following conditions are met:

* the entry has been accepted by a quorum of bookies
* all entries with a lower entry id have been accepted by a quorum of bookies

_LC_ seems mysterious right now, but it is too early to explain how we use it; just smile and move on.

p. Once all the other fields have been filled in, the client generates an authentication code with all of the previous fields. The entry is then sent to a quorum of bookies to be recorded. Any failures will result in the entry being sent to a new quorum of bookies.

p. To read, the client library initially contacts a bookie and starts requesting entries. If an entry is missing or invalid (a bad MAC for example), the client will make a request to a different bookie. By using quorum writes, as long as enough bookies are up we are guaranteed to eventually be able to read an entry.

h1. Bookkeeper metadata management

p. There is some metadata that needs to be made available to BookKeeper clients:

* The available bookies;
* The list of ledgers;
* The list of bookies that have been used for a given ledger;
* The last entry of a ledger;

p. We maintain this information in ZooKeeper. Bookies use ephemeral nodes to indicate their availability. Clients use znodes to track ledger creation and deletion and also to know the end of the ledger and the bookies that were used to store the ledger. Bookies also watch the ledger list so that they can clean up ledgers that get deleted.

h1. Closing out ledgers

p. The process of closing out the ledger and finding the last entry is difficult due to the durability guarantees of BookKeeper:

* If an entry has been successfully recorded, it must be readable.
* If an entry is read once, it must always be available to be read.

p. If the ledger was closed gracefully, ZooKeeper will have the last entry and everything will work well. But, if the BookKeeper client that was writing the ledger dies, there is some recovery that needs to take place.

p. The problematic entries are the ones at the end of the ledger. There can be entries in flight when a BookKeeper client dies. If the entry only gets to one bookie, the entry should not be readable since the entry will disappear if that bookie fails. If the entry is only on one bookie, that doesn't mean that the entry has not been recorded successfully; the other bookies that recorded the entry might have failed.

p. The trick to making everything work is to have a correct idea of the last entry. We do it in roughly three steps:

# Find the entry with the highest last recorded entry, _LC_ ;
# Find the highest consecutively recorded entry, _LR_ ;
# Make sure that all entries between _LC_ and _LR_ are on a quorum of bookies;

h1. Data Management in Bookies

p. This section gives an overview of how a bookie manages its ledger fragments.

h2. Basic

p. Bookies manage data in a log-structured way, which is implemented using three kinds of files:
Before any update takes place, a bookie ensures that a transaction describing the update is written to non-volatile storage. A new journal file is created when the bookie starts or when the older journal file reaches the journal file size threshold.
* _Entry Log_ : An entry log file manages the written entries received from BookKeeper clients. Entries from different ledgers are aggregated and written sequentially, while their offsets are kept as pointers in _LedgerCache_ for fast lookup. A new entry log file is created when the bookie starts or when the older entry log file reaches the entry log size threshold. Old entry log files are removed by the _Garbage Collector Thread_ once they are not associated with any active ledger.
* _Index File_ : An index file is created for each ledger. It comprises a header and several fixed-length index pages, recording the offsets of data stored in entry log files.

p. Since updating index files would introduce random disk I/O, for performance reasons index files are updated lazily by a _Sync Thread_ running in the background. Before index pages are persisted to disk, they are gathered in _LedgerCache_ for lookup.
* _LedgerCache_ : A memory pool that caches ledger index pages, which allows disk head scheduling to be managed more efficiently.

h2. Add Entry

p. When a bookie receives entries from clients to be written, these entries go through the following steps to be persisted to disk:
# Append the entry to the _Entry Log_, returning its position { logId , offset } ;
# Update the index of this entry in the _Ledger Cache_ ;
# Append a transaction corresponding to this entry update to the _Journal_ ;
# Respond to the BookKeeper client ;
* For performance reasons, the _Entry Log_ buffers entries in memory and commits them in batches, while the _Ledger Cache_ holds index pages in memory and flushes them lazily. We will discuss data flush and how to ensure data integrity in the following section 'Data Flush'.

h2. Data Flush

p. Ledger index pages are flushed to index files in the following two cases:
# _LedgerCache_ memory reaches its limit. There is no more space available to hold newer index pages. Dirty index pages will be evicted from _LedgerCache_ and persisted to index files.
# A background thread, the _Sync Thread_, is responsible for flushing index pages from _LedgerCache_ to index files periodically.

p. Besides flushing index pages, the _Sync Thread_ is responsible for rolling journal files in case journal files use too much disk space.

p. The data flush flow in the _Sync Thread_ is as follows:
# Records a _LastLogMark_ in memory. The _LastLogMark_ contains two parts: the first is _txnLogId_ (the file id of a journal) and the second is _txnLogPos_ (the offset in a journal). The _LastLogMark_ indicates that the entries before it have been persisted to both index and entry log files.
# Flushes dirty index pages from _LedgerCache_ to the index files, and flushes entry log files to ensure all buffered entries in entry log files are persisted to disk.
#* Ideally, a bookie just needs to flush index pages and entry log files that contain entries before _LastLogMark_. There is, however, no such information in _LedgerCache_ and _Entry Log_ mapping back to journal files. Consequently, the thread flushes _LedgerCache_ and _Entry Log_ entirely here, and may flush entries after the _LastLogMark_. Flushing more is not a problem, just redundant.
# Persists _LastLogMark_ to disk, which indicates that all entries added before _LastLogMark_ have had both their entry data and index pages persisted to disk.
It is then safe to remove journal files created earlier than _txnLogId_.
#* If the bookie crashes before persisting _LastLogMark_ to disk, it still has journal files containing entries for which index pages may not have been persisted. Consequently, when this bookie restarts, it inspects journal files to restore those entries; data isn't lost.

p. Using the above data flush mechanism, it is safe for the _Sync Thread_ to skip data flushing when the bookie shuts down. However, the _Entry Logger_ uses a _BufferedChannel_ to write entries in batches, and there might be data buffered in the _BufferedChannel_ upon a shutdown. The bookie needs to ensure the _Entry Logger_ flushes its buffered data during shutdown. Otherwise, _Entry Log_ files become corrupted with partial entries.

p. As described above, _EntryLogger#flush_ is invoked in the following two cases:
* in the _Sync Thread_ : used to ensure entries added before _LastLogMark_ are persisted to disk.
* in _ShutDown_ : used to ensure buffered data is persisted to disk, to avoid data corruption with partial entries.

h2. Data Compaction

p. In a bookie server, entries of different ledgers are interleaved in entry log files. The bookie server runs a _Garbage Collector_ thread to delete unassociated entry log files to reclaim disk space. If a given entry log file contains entries from a ledger that has not been deleted, then the entry log file will never be removed and the occupied disk space will never be reclaimed. To avoid such a case, a bookie server compacts entry log files in the _Garbage Collector_ thread to reclaim disk space.

p. There are two kinds of compaction running with different frequencies: _Minor Compaction_ and _Major Compaction_. They differ only in their threshold value and compaction interval.
# _Threshold_ : Size percentage of an entry log file occupied by undeleted ledgers. The default minor compaction threshold is 0.2, while the major compaction threshold is 0.8.
# _Interval_ : How often to run the compaction. The default minor compaction interval is 1 hour, while the major compaction interval is 1 day.

p. NOTE: if either _Threshold_ or _Interval_ is set to less than or equal to zero, compaction is disabled.

p. The data compaction flow in the _Garbage Collector Thread_ is as follows:
# The _Garbage Collector_ thread scans entry log files to get their entry log metadata, which records a list of ledgers comprising an entry log and their corresponding percentages.
# With the normal garbage collection flow, once the bookie determines that a ledger has been deleted, the ledger is removed from the entry log metadata and the size of the entry log is reduced.
# If the remaining size of an entry log file reaches a specified threshold, the entries of active ledgers in the entry log are copied to a new entry log file.
# Once all valid entries have been copied, the old entry log file is deleted.
bookkeeper-release-4.2.4/doc/bookkeeperProgrammer.textile000066400000000000000000000141141244507361200236210ustar00rootroot00000000000000Title: BookKeeper Programmer's Guide
Notice: Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at "http://www.apache.org/licenses/LICENSE-2.0":http://www.apache.org/licenses/LICENSE-2.0. .
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. .

h1. Abstract

This guide contains detailed information about using BookKeeper for write-ahead logging. It discusses the basic operations BookKeeper supports, and how to create logs and perform basic read and write operations on these logs.

The main classes used by a BookKeeper client are "BookKeeper":./apidocs/org/apache/bookkeeper/client/BookKeeper.html and "LedgerHandle":./apidocs/org/apache/bookkeeper/client/LedgerHandle.html. BookKeeper is the main client used to create, open and delete ledgers. A ledger is a log file in BookKeeper, which contains a sequence of entries. Only the client which creates a ledger can write to it. A LedgerHandle represents the ledger to the client, and allows the client to read and write entries. When the client is finished writing, it can close the LedgerHandle. Once a ledger has been closed, all clients who read from it are guaranteed to read the exact same entries in the exact same order.

All methods of BookKeeper and LedgerHandle have synchronous and asynchronous versions. Internally the synchronous versions are implemented using the asynchronous ones.

h1. Instantiating BookKeeper

To create a BookKeeper client, you need to create a configuration object and set the address of the ZooKeeper ensemble in use. For example, if you were using @zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181@ as your ensemble, you would create the BookKeeper client as follows.

ClientConfiguration conf = new ClientConfiguration();
conf.setZkServers("zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181"); 

BookKeeper client = new BookKeeper(conf);
It is important to close the client once you are finished working with it. The set calls on ClientConfiguration are chainable, so instead of putting each set* call on a new line as above, it is possible to make a number of calls on one line. For example:

ClientConfiguration conf = new ClientConfiguration().setZkServers("localhost:2181").setZkTimeout(5000);
There is also a useful shortcut constructor which allows you to pass the ZooKeeper ensemble string directly to BookKeeper.

BookKeeper client = new BookKeeper("localhost:2181");
See "BookKeeper":./apidocs/org/apache/bookkeeper/client/BookKeeper.html for the full api. h1. Creating a ledger p. Before writing entries to BookKeeper, it is necessary to create a ledger. Before creating the ledger you must decide the ensemble size and the quorum size. p. The ensemble size is the number of Bookies over which entries will be striped. The quorum size is the number of bookies which an entry will be written to. Striping is done in a round robin fashion. For example, if you have an ensemble size of 3 (consisting of bk1, bk2 & bk3), and a quorum of 2, entry 1 will be written to bk1 & bk2, entry 2 will be written to bk2 & bk3, entry 3 will be written to bk3 & bk1 and so on. p. Ledgers are also created with a digest type and password. The digest type is used to generate a checksum so that when reading entries we can ensure that the content is the same as what was written. The password is used as an access control mechanism. p. To create a ledger, with ensemble size 3, quorum size 2, using a CRC to checksum and "foobar" as the password, do the following:

LedgerHandle lh = client.createLedger(3, 2, DigestType.CRC32, "foobar".getBytes()); // the password is a byte[]
You can now write to this ledger handle. As you probably plan to read the ledger at some stage, now is a good time to store the id of the ledger somewhere. The ledger id is a long, and can be obtained with @lh.getId()@. h1. Adding entries to a ledger p. Once you have obtained a ledger handle, you can start adding entries to it. Entries are simply arrays of bytes. As such, adding entries to the ledger is rather simple.

lh.addEntry("Hello World!".getBytes());
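p. As mentioned in the abstract, each of these calls also has an asynchronous variant. As a minimal sketch (using the @AddCallback@ interface from @org.apache.bookkeeper.client.AsyncCallback@; the payload here is just an example), an asynchronous add might look like:

lh.asyncAddEntry("Hello World!".getBytes(), new AsyncCallback.AddCallback() {
	public void addComplete(int rc, LedgerHandle handle, long entryId, Object ctx) {
		// rc is a BKException.Code return code; OK means a quorum accepted the entry
		if (rc == BKException.Code.OK) {
			System.out.println("Added entry " + entryId);
		} else {
			System.err.println("Add failed: " + BKException.getMessage(rc));
		}
	}
}, null);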
h1. Closing a ledger

p. Once a client is done writing, it can close the ledger. Closing the ledger is a very important step in BookKeeper, as once a ledger is closed, all reading clients are guaranteed to read the same sequence of entries in the same order. Closing takes no parameters.

lh.close();
h1. Opening a ledger

To read from a ledger, a client must open it first. To open a ledger you must know its ID, which digest type was used when creating it, and its password. To open the ledger we created above, assuming it has ID 1:

LedgerHandle lh2 = client.openLedger(1, DigestType.CRC32, "foobar".getBytes());
You can now read entries from the ledger. Any attempt to write to this handle will throw an exception.

bq. NOTE: Opening a ledger which another client already has open for writing will prevent that client from writing any new entries to it. If you do not wish this to happen, you should use the openLedgerNoRecovery method. However, keep in mind that without recovery, you lose the guarantees of what entries are in the ledger. You should only use openLedgerNoRecovery if you know what you are doing.

h1. Reading entries from a ledger

p. Now that you have an open ledger, you can read entries from it. You can use @getLastAddConfirmed@ to get the id of the last entry in the ledger.

long lastEntry = lh2.getLastAddConfirmed();
Enumeration<LedgerEntry> entries = lh2.readEntries(0, lastEntry);
while (entries.hasMoreElements()) {
	byte[] bytes = entries.nextElement().getEntry();
	System.out.println(new String(bytes));
}
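p. When you have finished reading, close the read handle and, finally, the client itself, as discussed above:

lh2.close();
client.close();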
bookkeeper-release-4.2.4/doc/bookkeeperStarted.textile000066400000000000000000000100361244507361200231130ustar00rootroot00000000000000Title: BookKeeper Getting Started Guide
Notice: Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at "http://www.apache.org/licenses/LICENSE-2.0":http://www.apache.org/licenses/LICENSE-2.0. . Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. .

h1. Abstract

This guide contains detailed information about using BookKeeper for logging. It discusses the basic operations BookKeeper supports, and how to create logs and perform basic read and write operations on these logs.

h1. Getting Started: Setting up BookKeeper to write logs.

p. This document contains information to get you started quickly with BookKeeper. It is aimed primarily at developers wishing to try it out, and contains simple installation instructions for a basic BookKeeper installation and a simple programming example. For further programming detail, please refer to the "BookKeeper Programmer's Guide":bookkeeperProgrammer.html.

h1. Pre-requisites

p. See "System Requirements":./bookkeeperConfig.html#bk_sysReq in the Admin guide.

h1. Download

p. BookKeeper trunk can be downloaded from subversion. See "Version Control":http://zookeeper.apache.org/bookkeeper/svn.html.

h1. LocalBookKeeper

p. BookKeeper provides a utility program to start a standalone ZooKeeper ensemble and a number of bookies on a local machine. As this all runs on a local machine, throughput will be very low. It should only be used for testing.

p. To start a local bookkeeper ensemble with 5 bookies: @bookkeeper-server/bin/bookkeeper localbookie 5@

h1. Setting up bookies

p. If you're bold and you want more than just running things locally, then you'll need to run bookies on different servers. You'll need at least three bookies to start with.

p. For each bookie, we need to execute a command like the following: @bookkeeper-server/bin/bookkeeper bookie@

p. This command will use the default directories for storing ledgers and the write-ahead log, and will look for a ZooKeeper server on localhost:2181. See the "Admin Guide":./bookkeeperConfig.html for more details.

p. To see the default values of these configuration variables, run: @bookkeeper-server/bin/bookkeeper help@

h1. Setting up ZooKeeper

p. ZooKeeper stores metadata on behalf of BookKeeper clients and bookies. To get a minimal ZooKeeper installation to work with BookKeeper, we can set up one server running in standalone mode. Once we have the server running, we need to create a few znodes:
# @/ledgers @
# @/ledgers/available @

p. We provide a way of bootstrapping it automatically. See the "Admin Guide":./bookkeeperConfig.html for a description of how to bootstrap automatically, and in particular the shell metaformat command.

h1. Example

p. In the following excerpt of code, we:
# Open a bookkeeper client;
# Create a ledger;
# Write to the ledger;
# Close the ledger;
# Open the same ledger for reading;
# Read from the ledger;
# Close the ledger again;
# Close the bookkeeper client.

p. The excerpt assumes that @ledgerPassword@ is a @byte[]@ and @entries@ is an @ArrayList<byte[]>@ declared elsewhere.

BookKeeper bkc = new BookKeeper("localhost:2181");
LedgerHandle lh = bkc.createLedger(ledgerPassword);
ledgerId = lh.getId();

for (int i = 0; i < 10; i++) {
	// allocate a fresh buffer for each entry so that every element of
	// entries gets its own backing array (reusing a single buffer would
	// leave all elements pointing at the same array)
	ByteBuffer entry = ByteBuffer.allocate(4);
	entry.putInt(i);
	entry.position(0);
	entries.add(entry.array());
	lh.addEntry(entry.array());
}
lh.close();
lh = bkc.openLedger(ledgerId, ledgerPassword);

Enumeration<LedgerEntry> ls = lh.readEntries(0, 9);
int i = 0;
while (ls.hasMoreElements()) {
	ByteBuffer origbb = ByteBuffer.wrap(entries.get(i++));
	Integer origEntry = origbb.getInt();
	ByteBuffer result = ByteBuffer.wrap(ls.nextElement().getEntry());
	Integer retrEntry = result.getInt();
	// compare the original entry with the one read back
	System.out.println("Wrote " + origEntry + ", read " + retrEntry);
}
lh.close();
bkc.close();
bookkeeper-release-4.2.4/doc/bookkeeperStream.textile000066400000000000000000000101431244507361200227370ustar00rootroot00000000000000Title: Streaming with BookKeeper
Notice: Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at "http://www.apache.org/licenses/LICENSE-2.0":http://www.apache.org/licenses/LICENSE-2.0. . Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. .

h1. Abstract

This guide contains detailed information about how to stream bytes on top of BookKeeper. It motivates and discusses the basic stream operations currently supported.

h1. Summary

p. When using the BookKeeper API, an application has to split the data to write into entries, each entry being a byte array. This is natural for many applications. For example, when using BookKeeper for write-ahead logging, an application typically wants to write the modifications corresponding to a command or a transaction. Some other applications, however, might not have a natural boundary for entries, and may prefer to write and read streams of bytes. This is exactly the purpose of the stream API we have implemented on top of BookKeeper.

p. The stream API is implemented in the package @Streaming@ , and it contains two main classes: @LedgerOutputStream@ and @LedgerInputStream@ . The class names are indicative of what they do.

h1. Writing a stream of bytes

p. Class @LedgerOutputStream@ implements two constructors and five public methods:

@public LedgerOutputStream(LedgerHandle lh) @

p. where:
* @lh@ is a ledger handle for a previously created and open ledger.

@public LedgerOutputStream(LedgerHandle lh, int size) @

p. where:
* @lh@ is a ledger handle for a previously created and open ledger.
* @size@ is the size of the byte buffer to store written bytes before flushing.

_Closing a stream._ This call closes the stream by flushing the write buffer.

@public void close() @

p. which has no parameters.

_Flushing a stream._ This call essentially flushes the write buffer.

@public synchronized void flush() @

p. which has no parameters.

_Writing bytes._ There are three calls for writing bytes to a stream.

@public synchronized void write(byte[] b) @

p. where:
* @b@ is an array of bytes to write.

@public synchronized void write(byte[] b, int off, int len) @

p. where:
* @b@ is an array of bytes to write.
* @off@ is a buffer offset.
* @len@ is the length to write.

@public synchronized void write(int b) @

p. where:
* @b@ contains a byte to write. The method writes the least significant byte of the given integer; the three higher-order bytes are ignored.

h1. Reading a stream of bytes

p. Class @LedgerInputStream@ implements two constructors and four public methods:

@public LedgerInputStream(LedgerHandle lh) throws BKException, InterruptedException @

p. where:
* @lh@ is a ledger handle for a previously created and open ledger.

@public LedgerInputStream(LedgerHandle lh, int size) throws BKException, InterruptedException @

p. where:
* @lh@ is a ledger handle for a previously created and open ledger.
* @size@ is the size of the byte buffer to store bytes that the application will eventually read.
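p. Before detailing the remaining read calls, here is a minimal end-to-end sketch of the two classes working together. It assumes @lh@ is a ledger handle open for writing and @lh2@ a handle for reading, obtained as described in the "BookKeeper Programmer's Guide":bookkeeperProgrammer.html (imports and exception handling omitted):

// write a stream of bytes into a ledger
LedgerOutputStream out = new LedgerOutputStream(lh);
out.write("some bytes".getBytes());
out.close();           // flushes the write buffer
lh.close();

// read the bytes back from the reopened ledger
LedgerInputStream in = new LedgerInputStream(lh2);
byte[] buf = new byte[10];
int n = in.read(buf);  // returns the number of bytes actually read
in.close();            // note: does not close the ledger handle
lh2.close();           // the application closes the handle itself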
_Closing._ There is one call to close an input stream, but the call is currently empty and the application is responsible for closing the ledger handle. @public void close() @ p. which has no parameters. _Reading._ There are three calls to read from the stream. @public synchronized int read() throws IOException @ p. which has no parameters. @public synchronized int read(byte[] b) throws IOException @ p. where: * @b@ is a byte array to write to. @public synchronized int read(byte[] b, int off, int len) throws IOException @ p. where: * @b@ is a byte array to write to. * @off@ is an offset for byte array @b@ . * @len@ is the length in bytes to write to @b@ . bookkeeper-release-4.2.4/doc/doc.textile000066400000000000000000000023301244507361200202010ustar00rootroot00000000000000Notice: Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at "http://www.apache.org/licenses/LICENSE-2.0":http://www.apache.org/licenses/LICENSE-2.0. . Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. . In the documentation directory, you'll find: * @build.txt@: Building Hedwig, or how to set up Hedwig * @user.txt@: User's Guide, or how to program against the Hedwig API and how to run it * @dev.txt@: Developer's Guide, or Hedwig internals and hacking details These documents are all written in the "Pandoc":http://johnmacfarlane.net/pandoc/ dialect of "Markdown":http://daringfireball.net/projects/markdown/. This makes them readable as plain text files, but also capable of generating HTML or LaTeX documentation. Documents are wrapped at 80 chars and use 2-space indentation. bookkeeper-release-4.2.4/doc/hedwigBuild.textile000066400000000000000000000043231244507361200216670ustar00rootroot00000000000000Notice: Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at "http://www.apache.org/licenses/LICENSE-2.0":http://www.apache.org/licenses/LICENSE-2.0. . Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. . h1. Pre-requisites For the core itself: * JDK 6: "http://java.sun.com/":http://java.sun.com/. Ensure @$JAVA_HOME@ is correctly set. * Maven 2: "http://maven.apache.org/":http://maven.apache.org/. Hedwig has been tested on Windows XP, Linux 2.6, and OS X. h1. Command-Line Instructions From the top level bookkeeper directory, run @mvn package@. This will compile and package the jars necessary for running hedwig. See the User's Guide for instructions on running and usage. h1. Eclipse Instructions To check out, build, and develop using Eclipse: # Install the Subclipse plugin. Update site: "http://subclipse.tigris.org/update_1.4.x":http://subclipse.tigris.org/update_1.4.x. # Install the Maven plugin. Update site: "http://m2eclipse.sonatype.org/update":http://m2eclipse.sonatype.org/update. 
From the list of packages available from this site, select everything under the "Maven Integration" category, and from the optional components select the ones with the word "SCM" in them.
# Go to Preferences > Team > SVN. For the SVN interface, choose "Pure Java".
# Choose File > New > Project... > Maven > Checkout Maven Projects from SCM.
# For the SCM URL type, choose SVN. For the URL, enter the SVN URL. Maven will automatically create a top-level Eclipse project for each of the 4 Maven modules (recommended). If you want fewer top-level projects, uncheck the option of having a project for each module (under Advanced).

You are now ready to run and debug the client and server code. See the User's Guide for instructions on running and usage.
bookkeeper-release-4.2.4/doc/hedwigConsole.textile000066400000000000000000000136001244507361200222300ustar00rootroot00000000000000Title: Hedwig Console
Notice: Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at "http://www.apache.org/licenses/LICENSE-2.0":http://www.apache.org/licenses/LICENSE-2.0. . . Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. . .

h1. Hedwig Console

Apache Hedwig provides a console client, which allows users and administrators to interact with a hedwig cluster.

h2. Connecting to a hedwig cluster

The Hedwig console client is shipped with the hedwig server package.

p. To start the console client: @hedwig-server/bin/hedwig console@

p. By default, the console client connects to the hub server on localhost. If you want the console client to connect to a different hub server, you can override the following environment variables.

| @HEDWIG_CONSOLE_SERVER_CONF@ | Path of a hub server configuration file. Override to make the hedwig console client connect to the correct ZooKeeper cluster. |
| @HEDWIG_CONSOLE_CLIENT_CONF@ | Path of a hedwig client configuration file. Override to make the hedwig console client communicate with the correct hub servers. |

p. Once connected, you should see something like:
Connecting to zookeeper/bookkeeper using HedwigAdmin

Connecting to default hub server localhost/127.0.0.1:4080
Welcome to Hedwig!
JLine support is enabled
JLine history support is enabled
[hedwig: (standalone) 16] 
p. From the shell, type __help__ to get a list of commands that can be executed from the client:
[hedwig: (standalone) 16] help
HedwigConsole [options] [command] [args]

Available commands:
        pub
        sub
        closesub
        unsub
        rmsub
        consume
        consumeto
        pubsub
        show
        describe
        readtopic
        set
        history
        redo
        help
        quit
        exit

Finished 0.0020 s.
p. If you want to know the detailed usage of each command, type __help {command}__ in the shell. For example:
[hedwig: (standalone) 17] help pub
pub: Publish a message to a topic in Hedwig
usage: pub {topic} {message}

  {topic}   : topic name.
              any printable string without spaces.
  {message} : message body.
              remaining arguments are used as message body to publish.

Finished 0.0 s.
h2. Commands

All the available commands provided in the hedwig console can be categorized into three groups: __interactive commands__, __admin commands__ and __utility commands__.

h3. Interactive Commands

p. Interactive commands are used by users to communicate with a hedwig cluster. They are __pub__, __sub__, __closesub__, __unsub__, __consume__ and __consumeto__.

p. These commands are quite simple and have the same semantics as the corresponding hedwig client API calls.

h3. Admin Commands

p. Admin commands are used by administrators to operate or debug a hedwig cluster. They are __show__, __describe__, __pubsub__ and __readtopic__.

p. __show__ is used to list all available hub servers or topics in the cluster.

p. You can use __show__ to list hub servers to know how many hub servers are alive in the cluster.
[hedwig: (standalone) 27] show hubs
Available Hub Servers:
        192.168.1.102:4080:9876 :       0
Finished 0.0040 s.
p. Also, you can use __show__ to list all topics. If you have a lot of topics on the cluster, this command will take a long time to run.
[hedwig: (standalone) 28] show topics
Topic List:
[mytopic]
Finished 0.0020 s.
p. To see the details of a topic, run __describe__. This shows the metadata of a topic, including the topic owner, persistence info and subscription info.
[hedwig: (standalone) 43] describe topic mytopic
===== Topic Information : mytopic =====

Owner : 192.168.1.102:4080:9876

>>> Persistence Info <<<
Ledger 3 [ 1 ~ 9 ]

>>> Subscription Info <<<
Subscriber mysub : consumeSeqId: local:0

Finished 0.011 s.
p. When you run the __describe__ command, keep in mind that it reads the metadata directly from __ZooKeeper__, so the subscription info might not be completely up to date, because hub servers update subscription metadata lazily.

p. The __readtopic__ command is useful to see which messages have not been consumed by the client.
[hedwig: (standalone) 46] readtopic mytopic

>>>>> Ledger 3 [ 1 ~ 9] <<<<<

---------- MSGID=LOCAL(1) ----------
MsgId:     LOCAL(1)
SrcRegion: standalone
Message:

hello

---------- MSGID=LOCAL(2) ----------
MsgId:     LOCAL(2)
SrcRegion: standalone
Message:

hello 2

---------- MSGID=LOCAL(3) ----------
MsgId:     LOCAL(3)
SrcRegion: standalone
Message:

hello 3

...
p. __pubsub__ is another useful command for administrators. It can be used to test the availability and functionality of a cluster. It generates a temporary subscriber id from the current timestamp, subscribes to the given topic using the generated subscriber id, publishes a message to the given topic, and tests whether the subscriber receives the message.
[hedwig: (standalone) 48] pubsub testtopic testsub- 10 test message for availability
Starting PUBSUB test ...
Sub topic testtopic, subscriber id testsub--1338126964504
Pub topic testtopic : test message for availability-1338126964504
Received message : test message for availability-1338126964504
PUBSUB SUCCESS. TIME: 377 MS
Finished 0.388 s.
h3. Utility Commands

p. The utility commands are __help__, __history__, __redo__, __quit__ and __exit__.

p. __quit__ and __exit__ are used to exit the console, while __history__ and __redo__ are used to manage the history of commands executed in the shell.
bookkeeper-release-4.2.4/doc/hedwigDesign.textile000066400000000000000000000151371244507361200220460ustar00rootroot00000000000000Notice: Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at "http://www.apache.org/licenses/LICENSE-2.0":http://www.apache.org/licenses/LICENSE-2.0. . Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. .

h1. Style

We have provided an Eclipse Formatter file @formatter.xml@ with all the formatting conventions currently used in the project. Highlights include no tabs, 4-space indentation, and 120-char width. Please respect this so as to reduce the amount of formatting-related noise produced in commits.

h1. Static Analysis

We would like to use the static analysis tools PMD and FindBugs to maintain code quality. However, we have not yet arrived at a consensus on what rules to adhere to, and what to ignore.

h1. Netty Notes

The asynchronous network IO infrastructure that Hedwig uses is "Netty":http://www.jboss.org/netty. Here are some notes on Netty's concurrency architecture and its filter pipeline design.

h2. Concurrency Architecture

After calling @ServerBootstrap.bind()@, Netty starts a boss thread (@NioServerSocketPipelineSink.Boss@) that just accepts new connections and registers them with one of the workers from the @NioWorker@ pool in round-robin fashion (pool size defaults to CPU count). Each worker runs its own select loop over just the set of keys that have been registered with it. Workers start lazily on demand and run only so long as there are interested fd's/keys. All selected events are handled in the same thread and sent up the pipeline attached to the channel (this association is established by the boss as soon as a new connection is accepted).

All workers, and the boss, run via the executor thread pool; hence, the executor must support at least two simultaneous threads.

h2. Handler Pipeline

A pipeline implements the intercepting filter pattern. A pipeline is a sequence of handlers. Whenever a packet is read from the wire, it travels up the stream, stopping at each handler that can handle upstream events. Vice-versa for writes. Between each filter, control flows back through the centralized pipeline, and a linked list of contexts keeps track of where we are in the pipeline (one context object per handler).

h1. Pseudocode

This summarizes the control flow through the system.

h2. publish

Need to document

h2. subscribe

Need to document

h1. ReadAhead Cache

The delivery manager class is responsible for pushing published messages from the hubs to the subscribers. The most common case is that all subscribers are connected and either caught up or close to the tail end of the topic. In this case, we don't want the delivery manager to be polling bookkeeper for any newly arrived messages on the topic; new messages should just be pushed to the delivery manager.
However, there is also the uncommon case when a subscriber is behind, and messages must be pulled from BookKeeper. Since all publishes go through the hub, it is possible to cache the recently published messages in the hub; then the delivery manager won't have to make the trip to bookkeeper to get the messages but can instead get them from local process memory. These ideas of push, pull, and caching are unified in the following way:
* A hub has a cache of messages.
* When the delivery manager wants to deliver a message, it asks the cache for it. There are 3 cases:
** The message is available in the cache, in which case it is given to the delivery manager.
** The message is not present in the cache and the seq-id of the message is beyond the last message published on that topic (this happens if the subscriber is totally caught up for that topic). In this case, a stub is put in the cache in order to notify the delivery manager when that message does happen to be published.
** The message is not in the cache but has been published to the topic. In this case, a stub is put in the cache, and a read is issued to bookkeeper.
* Whenever a message is published, it is cached. If there is a stub already in the cache for that message, the delivery manager is notified.
* Whenever a message is read from bookkeeper, it is cached. There must be a stub for that message (since reads to bookkeeper are issued only after putting a stub), so the delivery manager is notified.
* The cache does readahead, i.e., if a message requested by the delivery manager is not in the cache, a stub is established not only for that message, but also for the next n messages, where n is configurable (default 10). On a cache hit, we look ahead n/2 messages, and if that message is not present, we establish another n/2 stubs. In short, we ensure that the next n stubs are always established.
* Over time, the cache will grow in size. There are 2 pruning mechanisms:
** Once all subscribers have consumed up to a particular seq-id, they notify the cache, and all messages up to that seq-id are pruned from the cache.
** If the above pruning is not working (e.g., because some subscribers are down), the cache will eventually hit its size limit, which is configurable (default: half of the maximum JVM heap size). At this point, messages are simply pruned in FIFO order. We use the size of the blobs in the messages to estimate the cache size. The assumption is that this size will dominate over fixed, object-level size overheads.
** Stubs are not purged because, according to the above simplification, they are of 0 size.

h1. Scalability Bottlenecks Down the Road

* Currently each topic subscription is served on a different channel. The number of channels will become a bottleneck at higher channel counts. We should switch to an architecture where multiple topic subscriptions between the same client-hub pair are served on the same channel. We can have commands to start and stop subscriptions sent all the way to the server (right now these are local).
* Publishes for a topic are serialized through a hub to get ordering guarantees. Currently, all subscriptions to that topic are served from the same hub. If we start having a large number of subscribers to heavy-volume topics, the outbound bandwidth at the hub, or the CPU at that hub, might become the bottleneck. In that case, we can set up other regions through which the messages are routed (this hierarchical scheme reduces bandwidth requirements at any single node).
It should be possible to do this entirely through configuration.
bookkeeper-release-4.2.4/doc/hedwigJMX.textile000066400000000000000000000040741244507361200212710ustar00rootroot00000000000000Title: Hedwig JMX
Notice: Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at "http://www.apache.org/licenses/LICENSE-2.0":http://www.apache.org/licenses/LICENSE-2.0. . . Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. . .

h1. JMX

Apache Hedwig has extensive support for JMX, which allows viewing and managing a hedwig cluster. This document assumes that you have basic knowledge of JMX. See the "Sun JMX Technology":http://java.sun.com/javase/technologies/core/mntr-mgmt/javamanagement/ page to get started with JMX. See the "JMX Management Guide":http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html for details on setting up local and remote management of VM instances.

By default the included __hedwig__ script supports only local management - review the linked document to enable support for remote management (beyond the scope of this document).

The __Hub Server__ is a JMX manageable server, which registers the proper MBeans during initialization to support JMX monitoring and management of the instance.

h1. Hub Server MBean Reference

This table details the JMX MBeans for a hub server.

| _.MBean | _.MBean Object Name | _.Description |
| PubSubServer | PubSubServer | Represents a hub server. It is the root MBean for a hub server, and includes statistics for the hub server, e.g. the number of packets sent/received/redirected, and statistics for pub/sub/unsub/consume operations. |
| NettyHandlers | NettyHandler | Provides statistics for Netty handlers. Currently it just returns the number of subscription channels established to a hub server. |
| ReadAheadCache | ReadAheadCache | Provides read-ahead cache statistics. |
Properties provide an efficient mechanism for supporting application-defined message filtering.
* @Body@ - Hedwig considers the message body as an opaque binary blob.
* @SrcRegion@ - Indicates where the message comes from.
* @MessageSeqId@ - The unique message sequence id assigned by Hedwig.

h3. Message Header Properties

A __Message__ object contains a built-in facility for supporting application-defined property values. In effect, this provides a mechanism for adding application-specific header fields to a message. By using properties and __message filters__, an application can have Hedwig select, or filter, messages on its behalf using application-specific criteria. Property names must be a __String__ and must not be null, while property values are binary blobs. The flexibility of binary blobs allows applications to define their own serialize/deserialize functions, allowing structured data to be stored in the message header.

h2. Message Filter

A __Message Filter__ allows an application to specify, via header properties, the messages it is interested in. Only messages which pass validation by a __Message Filter__, specified by a subscriber, are delivered to the subscriber. A message filter can be run either on the __server side__ or on the __client side__. For both the __server side__ and the __client side__, a __Message Filter__ implementation needs to implement the following two interfaces:
* @setSubscriptionPreferences(topic, subscriberId, preferences)@: The __subscription preferences__ of the subscriber are passed to the message filter when it is attached to its subscription, either on the server side or on the client side.
* @testMessage(message)@: Used to test whether a particular message passes the filter or not.

The __subscription preferences__ are used to specify the messages that the user is interested in. The __message filter__ uses the __subscription preferences__ to decide which messages are passed to the user. Take a book store (using topic __BookStore__) as an example:
# User A may only care about History books. He subscribes to __BookStore__ with his custom preferences : type="History".
# User B may only care about Romance books. He subscribes to __BookStore__ with his custom preferences : type="Romance".
# A new book arrives at the book store; a message is sent to __BookStore__ with type="History" in its header.
# The message is then delivered to __BookStore__'s subscribers.
# Subscriber A's filter accepts the message, since the type in the message header matches its preference "History".
# Subscriber B's filter rejects the message, as the type does not match its preferences.

h3. Client Message Filter.

A __ClientMessageFilter__ runs on the client side. Each subscriber can write its own filter (a sketch follows below) and pass it as a parameter when starting delivery ( __startDelivery(topic, subscriberId, messageHandler, messageFilter)__ ).

h3. Server Message Filter.

A __ServerMessageFilter__ runs on the server side (a hub server). A hub server instantiates a server message filter, by means of reflection, using the message filter class specified in the subscription preferences which are provided by the subscriber. Since __ServerMessageFilter__s run on the hub server, filtered-out messages are never delivered to the client, reducing unnecessary network traffic. Hedwig uses an implementation of __ServerMessageFilter__ to filter unnecessary message deliveries between regions.
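p. To make this concrete, here is a hedged sketch of a client-side filter for the book store example above. The method signatures follow the two interfaces just described; the @extractType@ helpers are hypothetical stand-ins for whatever application-defined (de)serialization is used for the "type" header property, and imports are omitted:

public class HistoryBookFilter implements ClientMessageFilter {
    private ByteString wantedType;

    public MessageFilterBase setSubscriptionPreferences(ByteString topic, ByteString subscriberId,
                                                        SubscriptionPreferences preferences) {
        // remember the book type this subscriber asked for in its preferences
        this.wantedType = extractType(preferences); // hypothetical helper
        return this;
    }

    public boolean testMessage(Message message) {
        // deliver only messages whose "type" header matches the preference
        return wantedType.equals(extractType(message)); // hypothetical helper
    }

    // application-specific deserialization of the "type" property;
    // the bodies below are placeholders for illustration only
    private ByteString extractType(SubscriptionPreferences preferences) { return ByteString.copyFromUtf8("History"); }
    private ByteString extractType(Message message) { return ByteString.copyFromUtf8("History"); }
}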
Since hub servers use reflection to instantiate a __ServerMessageFilter__, an implementation of __ServerMessageFilter__ needs to implement two additional methods:
* @initialize(conf)@: Initialize the message filter before filtering messages.
* @uninitialize()@: Uninitialize the message filter to release the resources it uses.

For the hub server to load the message filter, the implementation class must be in the server's classpath at startup.

h3. Which message filter should be used?

It depends on application requirements. Using a __ServerMessageFilter__ will reduce network traffic by filtering unnecessary messages, but it competes for resources on the hub server (CPU, memory, etc.). Conversely, __ClientMessageFilter__s have the advantage of inducing no extra load on the hub server, but at the price of higher network utilization. A filter can be installed both on the server side and on the client; Hedwig does not restrict this.
bookkeeper-release-4.2.4/doc/hedwigMetadata.textile000066400000000000000000000263351244507361200223550ustar00rootroot00000000000000Title: Hedwig Metadata Management
Notice: Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at "http://www.apache.org/licenses/LICENSE-2.0":http://www.apache.org/licenses/LICENSE-2.0. . . Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. . .

h1. Metadata Management

There are two classes of metadata that need to be managed in Hedwig: one is the __list of available hubs__, which is used to track server availability (ZooKeeper is designed naturally for this); the other is the set of data structures that track __topic states__ and __subscription states__. This second class can be handled by any key/value store which provides a __CAS (Compare And Set)__ operation. The metadata in this class are:
* @Topic Ownership@: tracks which hub server is assigned to serve requests for a specific topic.
* @Topic Persistence Info@: records which __bookkeeper ledgers__ are used to store messages for a specific topic and their message id ranges.
* @Subscription Data@: records the preferences and subscription state for a specific subscription (topic, subscriber).

Each kind of metadata is handled by a specific metadata manager: __TopicOwnershipManager__, __TopicPersistenceManager__ and __SubscriptionDataManager__.

h2. Topic Ownership Management

There are two ways to manage topic ownership. One is leveraging ZooKeeper's ephemeral znodes to record the topic's owner info as a child ephemeral znode under its topic znode. When a hub server owning a specific topic crashes, the ephemeral znode which signifies topic ownership is deleted due to the loss of the zookeeper session, and other hubs can then be assigned ownership of the topic. The other way is to leverage the __CAS__ operation provided by key/value stores to do leader election. __CAS__ doesn't require the underlying key/value store to provide functionality similar to ZooKeeper's ephemeral nodes. With __CAS__ it is possible to guarantee that only one hub server gains ownership of a specific topic, which is a more scalable and generic solution.
The implementation of a __TopicOwnershipManager__ is required to implement the following methods:


public void readOwnerInfo(ByteString topic, Callback<Versioned<HubInfo>> callback, Object ctx);

public void writeOwnerInfo(ByteString topic, HubInfo owner, Version version,
                           Callback<Version> callback, Object ctx);

public void deleteOwnerInfo(ByteString topic, Version version,
                            Callback<Void> callback, Object ctx);

* @readOwnerInfo@: Read the owner info from the underlying key/value store. The implementation should take responsibility for deserializing the metadata into a __HubInfo__ object identifying a hub server. Also, its current __version__ needs to be returned for future updates. If there is no owner info found for a topic, a null value is returned.
* @writeOwnerInfo@: Write the owner info into the underlying key/value store with the given __version__. If the current __version__ in the underlying key/value store doesn't equal the provided __version__, the write should be rejected with __BadVersionException__. The new __version__ should be returned for a successful write. __NoTopicOwnerInfoException__ is returned if no owner info is found for a topic.
* @deleteOwnerInfo@: Delete the owner info from the key/value store with the given __version__. The owner info should be removed if the current __version__ in the key/value store is equal to the provided __version__. Otherwise, the deletion should be rejected with __BadVersionException__. __NoTopicOwnerInfoException__ is returned if no owner info is found for the topic.

h2. Topic Persistence Info Management

Similar to __TopicOwnershipManager__, an implementation of __TopicPersistenceManager__ is required to implement READ/WRITE/DELETE interfaces as below:

public void readTopicPersistenceInfo(ByteString topic,
                                     Callback<Versioned<LedgerRanges>> callback, Object ctx);

public void writeTopicPersistenceInfo(ByteString topic, LedgerRanges ranges, Version version,
                                      Callback<Version> callback, Object ctx);

public void deleteTopicPersistenceInfo(ByteString topic, Version version,
                                       Callback<Void> callback, Object ctx);
* @readTopicPersistenceInfo@: Read the persistence info from the underlying key/value store. The implementation should take responsibility for deserializing the metadata into a __LedgerRanges__ object that includes the ledgers used to store messages. Also, its current __version__ needs to be returned for future updates. If there is no persistence info found for a topic, a null value is returned.
* @writeTopicPersistenceInfo@: Write the persistence info into the underlying key/value store with the given __version__. If the current __version__ in the underlying key/value store doesn't equal the provided __version__, the write should be rejected with __BadVersionException__. The new __version__ should be returned on a successful write. __NoTopicPersistenceInfoException__ is returned if no persistence info is found for a topic.
* @deleteTopicPersistenceInfo@: Delete the persistence info from the key/value store with the given __version__. The persistence info should be removed if the current __version__ in the key/value store equals the provided __version__. Otherwise, the deletion should be rejected with __BadVersionException__. __NoTopicPersistenceInfoException__ is returned if no persistence info is found for a topic.

h2. Subscription Data Management

__SubscriptionDataManager__ has READ/CREATE/WRITE/DELETE interfaces similar to the other managers. In addition, the implementation needs to implement a __READ SUBSCRIPTIONS__ interface, which fetches all the subscriptions for a given topic.

public void createSubscriptionData(ByteString topic, ByteString subscriberId, SubscriptionData data,
                                   Callback<Version> callback, Object ctx);

public boolean isPartialUpdateSupported();

public void updateSubscriptionData(ByteString topic, ByteString subscriberId, SubscriptionData dataToUpdate, 
                                   Version version, Callback<Version> callback, Object ctx);

public void replaceSubscriptionData(ByteString topic, ByteString subscriberId, SubscriptionData dataToReplace,
                                    Version version, Callback<Version> callback, Object ctx);

public void deleteSubscriptionData(ByteString topic, ByteString subscriberId, Version version,
                                   Callback<Void> callback, Object ctx);

public void readSubscriptionData(ByteString topic, ByteString subscriberId,
                                 Callback<Versioned<SubscriptionData>> callback, Object ctx);

public void readSubscriptions(ByteString topic, Callback<Map<ByteString, Versioned<SubscriptionData>>> cb,
                              Object ctx);
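p. All of these managers follow the same versioned __CAS__ style: read a value together with its __version__, then write back with that __version__, retrying if the store reports __BadVersionException__. As a hedged sketch (not Hedwig source code; @advanceState@ is a hypothetical helper and error handling is simplified), an update of subscription data might look like:

void updateSubscription(final SubscriptionDataManager manager, final ByteString topic,
                        final ByteString subscriberId) {
    manager.readSubscriptionData(topic, subscriberId, new Callback<Versioned<SubscriptionData>>() {
        public void operationFinished(Object ctx, Versioned<SubscriptionData> current) {
            SubscriptionData updated = advanceState(current.getValue()); // hypothetical helper
            manager.replaceSubscriptionData(topic, subscriberId, updated, current.getVersion(),
                new Callback<Version>() {
                    public void operationFinished(Object ctx, Version newVersion) {
                        // success; newVersion would be used for any further updates
                    }
                    public void operationFailed(Object ctx, PubSubException e) {
                        // e.g. BadVersionException: another writer updated first,
                        // so re-read and retry the whole operation
                        updateSubscription(manager, topic, subscriberId);
                    }
                }, null);
        }
        public void operationFailed(Object ctx, PubSubException e) { /* report the error */ }
    }, null);
}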
h3. Create/Update Subscriptions

The metadata for a subscription includes two parts: one is the preferences and the other is the subscription state. __SubscriptionPreferences__ tracks all the preferences for a subscriber (e.g., an application could store its customized preferences for message filtering), while __SubscriptionState__ is used internally to track the message consumption state for a given subscriber. These two kinds of metadata are quite different: __SubscriptionPreferences__ is not updated frequently, while __SubscriptionState__ is updated frequently as messages are consumed. If the underlying key/value store supports independent field updates for a given key (subscription), __SubscriptionPreferences__ and __SubscriptionState__ can be stored as two different fields for a given subscription. In this case __isPartialUpdateSupported__ should return true. Otherwise, __isPartialUpdateSupported__ should return false and the implementation should serialize/deserialize __SubscriptionData__ as an opaque blob.
* @createSubscriptionData@: Create a subscription entry for a given topic. The initial __version__ would be returned for a successful creation. __SubscriptionStateExistsException__ is returned if the subscription entry already exists.
* @updateSubscriptionData/replaceSubscriptionData@: Update/replace the subscription data in the underlying key/value store with the given __version__. If the current __version__ in the underlying key/value store doesn't equal the provided __version__, the update should be rejected with __BadVersionException__. The new __version__ should be returned for a successful write. __NoSubscriptionStateException__ is returned if no subscription entry is found for a subscription (topic, subscriber).

h3. Read Subscriptions

* @readSubscriptionData@: Read the subscription data from the underlying key/value store. The implementation should take responsibility for deserializing the metadata into a __SubscriptionData__ object including its preferences and subscription state. Also, its current __version__ needs to be returned for future updates. If there is no subscription data found for a subscription, a null value is returned.
* @readSubscriptions@: Read all the subscription data from the key/value store for a given topic. The implementation should take responsibility for managing all subscriptions for a topic for efficient access. An empty map is returned if there are no subscriptions found for a given topic.

h3. Delete Subscription

* @deleteSubscriptionData@: Delete the subscription data from the key/value store with the given __version__ for a specific subscription (topic, subscriber). The subscription info should be removed if the current __version__ in the key/value store equals the provided __version__. Otherwise, the deletion should be rejected with __BadVersionException__. __NoSubscriptionStateException__ is returned if no subscription data is found for a subscription (topic, subscriber).

h1. How to choose a key/value store for Hedwig.

From the interface, several requirements need to be met before picking a key/value store for Hedwig:
* @CAS@: The ability to do strict updates according to a specific condition, i.e. a specific version (ZooKeeper) or identical content (HBase).
* @Optimized for Writes@: The metadata access pattern for Hedwig is a read first, followed by continuous updates.
* @Optimized for retrieving all subscriptions for a topic@: Either hierarchical structures to maintain such relationships (ZooKeeper), or ordered key/value storage that clusters the subscriptions for a topic together, would provide efficient subscription data management.

__ZooKeeper__ is the default implementation for Hedwig metadata management; it holds data in memory and provides a filesystem-like namespace, meeting the above requirements. __ZooKeeper__ is suitable for most Hedwig use cases. However, if your application needs to manage millions of topics/subscriptions, a more scalable solution would be __HBase__, which also meets the above requirements.
bookkeeper-release-4.2.4/doc/hedwigParams.textile000066400000000000000000000151161244507361200220550ustar00rootroot00000000000000Notice: Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at "http://www.apache.org/licenses/LICENSE-2.0":http://www.apache.org/licenses/LICENSE-2.0. . Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. .

h1. Hedwig configuration parameters

This page contains detailed information about configuration parameters used for Hubs, Regions, ZooKeeper, and BookKeeper.

h2. Hedwig server configuration parameters

Please also refer to the configuration file that comes with the distribution: _hedwig-server/conf/hw_server.conf_.

h3. Region related parameters

| @region@ | Region identifier. Default is "standalone". |
| @regions@ | List of region identifiers, space separated. Default is empty. |
| @inter_region_ssl_enabled (deprecated)@ | Enables SSL across regions. Default is false. *Since this parameter has been deprecated, use __ssl_enabled__ in _hedwig-server/conf/hw_region_client.conf_ to enable SSL across regions instead.* |
| @retry_remote_subscribe_thread_run_interval@ | This parameter is used to determine how often we run a thread to retry those failed remote subscriptions in asynchronous mode (in milliseconds). Default is 2 minutes. |

h3. Hub server parameters

| @standalone@ | Sets the hub server to run in standalone mode (no regions). Default is false. |
| @server_port@ | Sets the server port that receives client connections. Default is 4080. |
| @ssl_enabled@ | Enables SSL. Default is false. |
| @ssl_server_port@ | Sets the server port for SSL connections. Default is 9876. |
| @password@ | Password used for the pkcs12 certificate. Default is the empty string. |
| @cert_name@ | Sets the name of the SSL certificate if available as a resource. Default is the null string. |
| @cert_path@ | Sets the path to the SSL certificate if it is available as a file. Default is the null string. |

h3. Read-ahead cache parameters

| @readahead_enabled@ | Enables read-ahead. Enabled by default. |
| @readahead_count@ | Number of messages to read ahead. Default is 10. |
| @readahead_size@ | Maximum number of bytes to read during a scan. Default is 4 megabytes. |

bq. Upon a range scan request for a given topic, two hints are provided as to when scanning should stop: the number of messages scanned and the total size of messages scanned. Scanning stops whenever one of these limits is exceeded.

| @cache_size@ | Sets the size of the read-ahead cache.
h3. Read-ahead cache parameters

| @readahead_enabled@ | Enables read-ahead. Enabled by default. |
| @readahead_count@ | Number of messages to read ahead. Default is 10. |
| @readahead_size@ | Maximum number of bytes to read during a scan. Default is 4 megabytes. |

bq. Upon a range scan request for a given topic, two hints are provided as to when scanning should stop: the number of messages scanned and the total size of messages scanned. Scanning stops whenever one of these limits is exceeded.

| @cache_size@ | Sets the size of the read-ahead cache. Default is the smaller of 2G or half the heap size. |
| @cache_entry_ttl@ | Sets the TTL for cache entries. Each time a new entry is added to the cache, expired cache entries are discarded. If the value is zero or negative, cache entries are not evicted until the cache is full or the messages have already been consumed. Default is 0. |
| @scan_backoff_ms@ | The backoff time (in milliseconds) to retry scans after failures. Default is 1s (1000ms). |
| @num_readahead_cache_threads@ | Sets the number of threads to be used for the read-ahead mechanism. Default is the number of cores as returned by Runtime.getRuntime().availableProcessors(). |

h3. Publish and subscription parameters

| @max_message_size@ | Sets the maximum message size. Default is 1.2 megabytes. |
| @default_message_window_size@ | Sets the default maximum number of messages that can be delivered to a subscriber without being consumed. Delivery to a subscriber is paused when the window size is reached. Default is unlimited (0). |
| @consume_interval@ | Sets the number of messages consumed before persisting information about consumed messages. A value greater than one avoids persisting information about consumed messages upon every consumed message. Default is 50. |
| @retention_secs@ | The interval (in seconds) after which an owned topic is released. If this parameter is greater than zero, a task is scheduled to release the owned topic. Default is 0 (never released). |
| @messages_consumed_thread_run_interval@ | Time interval (in milliseconds) at which to run the messages-consumed timer task that deletes consumed ledgers in BookKeeper. Default is 1 minute (60,000 ms). |

h3. ZooKeeper parameters

| @zk_host@ | Sets the ZooKeeper list of servers. Default is localhost:2181. |
| @zk_timeout@ | Sets the ZooKeeper session timeout. Default is 2s. |

h3. BookKeeper parameters

| @bk_ensemble_size@ | Sets the ensemble size. Default is 3. |
| @bk_write_quorum_size@ | Sets the write quorum size. Default is 2. |
| @bk_ack_quorum_size@ | Sets the ack quorum size. Default is 2. |

bq. Note that the ack quorum size must be less than or equal to the write quorum size.

| @max_entries_per_ledger@ | Maximum number of entries before we roll a ledger. Default is unlimited (0). |

h3. Metadata parameters

| @zk_prefix@ | Sets the ZooKeeper path prefix. Default is _/hedwig_. |
| @metadata_manager_based_topic_manager_enabled@ | Enables the use of a metadata manager for topic management. Default is false. |
| @metadata_manager_factory_class@ | Sets the default factory for the metadata manager. Default is null. |

h2. Region manager configuration parameters

Please also refer to the configuration file that comes with the distribution: _hedwig-server/conf/hw_region_client.conf_.

| @ssl_enabled@ | A boolean flag indicating whether communication with the server should be done via SSL for encryption. The Hedwig server hubs also need to be SSL enabled for this to work. Default value is false. |
| @max_message_size@ | Sets the maximum message size in bytes. The default value is 2 MB (2097152). |
| @max_server_redirects@ | Sets the maximum number of redirects we permit before signaling an error. Default value is 2. |
| @auto_send_consume_message_enabled@ | A flag indicating whether the client library should automatically send consume messages to the server. Default value is true. |
| @consumed_messages_buffer_size@ | Sets the number of messages we buffer before sending a consume message to the server. Default value is 5. |
| @max_outstanding_messages@ | Supports client-side throttling by setting the maximum number of outstanding messages. Default value is 10. |
| @server_ack_response_timeout@ | Sets the timeout (in milliseconds) before we error out any existing requests. Default value is 30s (30,000). |

bookkeeper-release-4.2.4/doc/hedwigUser.textile000066400000000000000000000121031244507361200215410ustar00rootroot00000000000000
Notice: Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at "http://www.apache.org/licenses/LICENSE-2.0":http://www.apache.org/licenses/LICENSE-2.0.
.
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
.

h1. Design

In Hedwig, clients publish messages associated with a topic, and they subscribe to a topic to receive all messages published with that topic. Clients are associated with (publish to and subscribe from) a Hedwig _instance_ (also referred to as a _region_), which consists of a number of servers called _hubs_. The hubs partition topic ownership among themselves, and all publishes and subscribes to a topic must be done to its owning hub. When a client doesn't know the owning hub, it tries a default hub, which may redirect the client.

Running a Hedwig instance requires a ZooKeeper server and at least three BookKeeper servers. An instance is designed to run within a datacenter. For wide-area messaging across datacenters, specify in the server configuration the set of default servers for each of the other instances. Dissemination among instances currently takes place over an all-to-all topology. Local subscriptions cause the hub to subscribe to all other regions on this topic, so that the local region receives all updates to it. Future work includes allowing the user to overlay alternative topologies.

Because all messages on a topic go through a single hub per region, all messages within a region are ordered. This means that, for a given topic, messages are delivered in the same order to all subscribers within a region, and messages from any particular region are delivered in the same order to all subscribers globally, but messages from different regions may be delivered in different orders to different regions. Providing global ordering is prohibitively expensive in the wide area. However, in Hedwig clients such as PNUTS, the lack of global ordering is not a problem, as PNUTS serializes all updates to a table row at a single designated master for that row.

Topics are independent; Hedwig provides no ordering across different topics.

Version vectors are associated with each topic and serve as the identifiers for each message. Vectors consist of one component per region. A component value is the region's local sequence number on the topic, and is incremented each time a hub persists a message (published either locally or remotely) to BK.

TODO: More on how version vectors are to be used, and on maintaining vector-maxes.
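To make the vector bookkeeping concrete, here is a minimal sketch that models a per-topic vector as a map from region name to local sequence number. This is an illustration of the idea only, not Hedwig's actual implementation.

<pre><code>
import java.util.HashMap;
import java.util.Map;

// Sketch of a per-topic version vector: one sequence-number component
// per region. Illustrative only.
class VersionVector {
    private final Map<String, Long> components = new HashMap<>();

    // Called when the local hub persists a message for this topic.
    long increment(String region) {
        long next = components.getOrDefault(region, 0L) + 1;
        components.put(region, next);
        return next;
    }

    // Keep the per-component maximum of two vectors ("vector-max").
    void mergeMax(VersionVector other) {
        other.components.forEach((region, seq) ->
            components.merge(region, seq, Math::max));
    }
}
</code></pre>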
h1. Entry Points

The main class for running the server is @org.apache.hedwig.server.netty.PubSubServer@. It takes a single argument, which is a "Commons Configuration":http://commons.apache.org/configuration/ file. Currently, for configuration, the source code is the documentation: see @org.apache.hedwig.server.conf.ServerConfiguration@ for server configuration parameters.

The client is a library intended to be consumed by user applications. It takes a Commons Configuration object, for which the source/documentation is in @org.apache.hedwig.client.conf.ClientConfiguration@.

h1. Deployment

h2. Limits

Because the current implementation uses a single socket per subscription, Hedwig requires a high @ulimit@ on the number of open file descriptors. Non-root users can only use up to the limit specified in @/etc/security/limits.conf@; to raise this to 1024^2, as root, modify the "nofile" line in /etc/security/limits.conf on all hubs.

h2. Running Servers

Hedwig requires BookKeeper to run. For BookKeeper setup instructions see "BookKeeper Getting Started":./bookkeeperStarted.html.

To start a Hedwig hub server:

@hedwig-server/bin/hedwig server@

Hedwig takes its configuration from hedwig-server/conf/hw_server.conf by default. To change the location of the conf file, modify the HEDWIG_SERVER_CONF environment variable.

h1. Debugging

You can attach an Eclipse debugger (or any debugger) to a Java process running on a remote host, as long as it has been started with the appropriate JVM flags. (See the Building Hedwig document to set up your Eclipse environment.) To launch something using @bin/hedwig@ with debugger attachment enabled, prefix the command with @HEDWIG_EXTRA_OPTS=-agentlib:jdwp=transport=dt_socket,server=y,address=5000@, e.g.:

@HEDWIG_EXTRA_OPTS=-agentlib:jdwp=transport=dt_socket,server=y,address=5000 hedwig-server/bin/hedwig server@

h1. Logging

Hedwig uses "slf4j":http://www.slf4j.org for logging, with the log4j bindings enabled by default. To enable logging from Hedwig, create a log4j.properties file and point the environment variable HEDWIG_LOG_CONF to the file. The path to the log4j.properties file must be absolute.

@export HEDWIG_LOG_CONF=/tmp/log4j.properties@
@hedwig-server/bin/hedwig server@
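For example, a minimal log4j.properties that sends INFO-level output to the console might look like the following (an illustrative example; the distribution does not ship this exact file):

<pre><code>
# Illustrative log4j.properties: log INFO and above to the console.
log4j.rootLogger=INFO, CONSOLE
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{1}: %m%n
</code></pre>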
bookkeeper-release-4.2.4/doc/index.textile000066400000000000000000000042331244507361200205470ustar00rootroot00000000000000
Title: BookKeeper Documentation
Notice: Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
.
http://www.apache.org/licenses/LICENSE-2.0
.
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

h1. Apache BookKeeper documentation

* "Overview":./bookkeeperOverview.html
* "Getting started":./bookkeeperStarted.html
* "Programmer's Guide":./bookkeeperProgrammer.html
* "Bookie Server Configuration Parameters":./bookieConfigParams.html
* "BookKeeper Configuration Parameters":./bookkeeperConfigParams.html
* "BookKeeper Internals":./bookkeeperInternals.html
* "Bookie Recovery":./bookieRecovery.html
* "Using BookKeeper stream library":./bookkeeperStream.html
* "BookKeeper Metadata Management":./bookkeeperMetadata.html

h2. BookKeeper Admin & Ops

* "Admin Guide":./bookkeeperConfig.html
* "BookKeeper JMX":./bookkeeperJMX.html

h1. Apache Hedwig documentation

* "Building Hedwig, or how to set up Hedwig":./hedwigBuild.html
* "User's Guide, or how to program against the Hedwig API and how to run it":./hedwigUser.html
* "Developer's Guide, or Hedwig internals and hacking details":./hedwigDesign.html
* "Configuration parameters":./hedwigParams.html
* "Message Filtering":./hedwigMessageFilter.html
* "Hedwig Metadata Management":./hedwigMetadata.html

h2. Hedwig Admin & Ops

* "Hedwig Console":./hedwigConsole.html
* "Hedwig JMX":./hedwigJMX.html

h1. Metastore documentation

* "Metastore Interface":./metastore.html

bookkeeper-release-4.2.4/doc/metastore.textile000066400000000000000000000111421244507361200214400ustar00rootroot00000000000000
Title: Metastore Interface
Notice: Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at "http://www.apache.org/licenses/LICENSE-2.0":http://www.apache.org/licenses/LICENSE-2.0.
.
.
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
.
.

h1. Metastore Interface

Although Apache BookKeeper provides "LedgerManager":./bookkeeperMetadata.html and "Hedwig Metadata Managers":./hedwigMetadata.html for users to plug in different metadata storages for both BookKeeper and Hedwig, it is quite difficult to implement a correct and efficient manager without knowledge of both projects. The __MetaStore__ interface extracts the commonality of the metadata storage interfaces and is provided so that users can focus on adapting the underlying storage itself without having to worry about the detailed logic of BookKeeper and Hedwig.

h2. MetaStore

The __MetaStore__ interface provides users with access to __MetastoreTable__s used for BookKeeper and Hedwig metadata management. There are two kinds of table defined in a __MetaStore__: __MetastoreTable__, which provides basic __PUT__, __GET__, __REMOVE__ and __SCAN__ operations and does not assume any ordering requirements from the underlying storage; and __MetastoreScannableTable__, which is derived from __MetastoreTable__ but *does* assume that data is stored in key order in the underlying storage.

* @getName@: Return the name of the __MetaStore__.
* @getVersion@: Return the current __MetaStore__ plugin version.
* @init@: Initialize the __MetaStore__ library with the given configuration and its version.
* @close@: Close the __MetaStore__, freeing all resources, e.g. releasing all open connections, occupied memory, etc.
* @createTable@: Create a table instance to access the data stored in it. A table name is given to locate the table. A __MetastoreTable__ object is returned.
* @createScannableTable@: Similar to __createTable__, but returns a __MetastoreScannableTable__ rather than a __MetastoreTable__ object. If the underlying table is not an ordered table, a __MetastoreException__ should be thrown.

h2. MetaStore Table

__MetastoreTable__ is a basic unit in a __MetaStore__, which is used to handle different types of metadata; e.g. one __MetastoreTable__ might store metadata for ledgers, while another stores metadata for topic persistence info. The interface for a __MetastoreTable__ is quite simple:
* @get@: Retrieve an entry by a given __key__. __OK__ and the entry's current version in the metadata storage are returned on success. __NoKey__ is returned for a non-existent key. If __fields__ are specified, return only the specified fields for the key.
* @put@: Put the given __value__ associated with __key__ with the given __version__. The value is only updated when the given __version__ equals the current version in the metadata storage. A new __version__ should be returned when the update succeeds. __NoKey__ is returned for a non-existent key; __BadVersion__ is returned when an update is attempted with a __version__ which does not match the one in the metadata store.
* @remove@: Remove the given __value__ associated with __key__. The value is only removed when the given __version__ equals the current version in the metadata storage. __NoKey__ is returned for a non-existent key; __BadVersion__ is returned when removal is attempted with a __version__ which does not match.
* @openCursor@: Open a __cursor__ to iterate over all the entries of a table. The returned cursor doesn't need to guarantee any ordering or transactional behavior.

h2. MetaStore Scannable Table

__MetastoreScannableTable__ is identical to a __MetastoreTable__ except that it provides an additional interface to iterate over entries in the table in key order.

* @openCursor@: Open a __cursor__ to iterate over all the entries of a table between the key range of __firstKey__ and __lastKey__.
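For illustration, an ordered in-memory backend makes the key-range cursor above a simple sub-map iteration. The types below are an illustrative sketch, not the real MetaStore interfaces:

<pre><code>
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentSkipListMap;

// Illustrative ordered-table sketch, not the real MetastoreScannableTable.
class OrderedTable {
    // Sorted map keeps entries in key order, as the ordered storage must.
    private final ConcurrentSkipListMap<String, byte[]> rows =
        new ConcurrentSkipListMap<>();

    void put(String key, byte[] value) { rows.put(key, value); }

    // openCursor(firstKey, lastKey) analogue: iterate entries in key order
    // over the inclusive range [firstKey, lastKey].
    Iterator<Map.Entry<String, byte[]>> openCursor(String firstKey, String lastKey) {
        return rows.subMap(firstKey, true, lastKey, true).entrySet().iterator();
    }
}
</code></pre>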
h2. How to organize your metadata

Some metadata in Hedwig and BookKeeper does not need to be stored in order of ledger id or topic: topic ownership and topic persistence info fall into this category, so a hash-table-like store can be used for them. Subscription state and ledger metadata, on the other hand, must be stored in key order due to the current logic in Hedwig/BookKeeper.

bookkeeper-release-4.2.4/formatter.xml000066400000000000000000000721121244507361200200210ustar00rootroot00000000000000
bookkeeper-release-4.2.4/hedwig-client/000077500000000000000000000000001244507361200200145ustar00rootroot00000000000000
bookkeeper-release-4.2.4/hedwig-client/conf/000077500000000000000000000000001244507361200207415ustar00rootroot00000000000000
bookkeeper-release-4.2.4/hedwig-client/conf/hw_client.conf000066400000000000000000000022301244507361200235610ustar00rootroot00000000000000
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# The default Hedwig server host to contact (this ideally should be a VIP
# that fronts all of the Hedwig server hubs).
default_server_host=localhost:4080:9876

# This parameter is a boolean flag indicating if communication with the
# server should be done via SSL for encryption. The Hedwig server hubs also
# need to be SSL enabled for this to work.
ssl_enabled=false
bookkeeper-release-4.2.4/hedwig-client/pom.xml000066400000000000000000000104431244507361200213330ustar00rootroot00000000000000
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>org.apache.bookkeeper</groupId>
    <artifactId>bookkeeper</artifactId>
    <version>4.2.4</version>
  </parent>
  <properties>
    <mainclass>org.apache.hedwig.client.App</mainclass>
  </properties>
  <artifactId>hedwig-client</artifactId>
  <packaging>jar</packaging>
  <name>hedwig-client</name>
  <url>http://maven.apache.org</url>
  <dependencies>
    <dependency>
      <groupId>com.google.guava</groupId>
      <artifactId>guava</artifactId>
      <version>${guava.version}</version>
    </dependency>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.8.1</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.bookkeeper</groupId>
      <artifactId>hedwig-protocol</artifactId>
      <version>${project.parent.version}</version>
      <type>jar</type>
      <scope>compile</scope>
    </dependency>
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-api</artifactId>
      <version>1.6.4</version>
    </dependency>
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-log4j12</artifactId>
      <version>1.6.4</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.jboss.netty</groupId>
      <artifactId>netty</artifactId>
      <version>3.2.4.Final</version>
      <scope>compile</scope>
    </dependency>
    <dependency>
      <groupId>commons-configuration</groupId>
      <artifactId>commons-configuration</artifactId>
      <version>1.6</version>
    </dependency>
    <dependency>
      <groupId>org.apache.zookeeper</groupId>
      <artifactId>zookeeper</artifactId>
      <version>3.4.3</version>
      <type>jar</type>
    </dependency>
    <dependency>
      <groupId>org.apache.bookkeeper</groupId>
      <artifactId>bookkeeper-server</artifactId>
      <version>${project.parent.version}</version>
      <type>jar</type>
      <scope>compile</scope>
    </dependency>
    <dependency>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
      <version>1.2.15</version>
      <scope>provided</scope>
      <exclusions>
        <exclusion>
          <groupId>javax.mail</groupId>
          <artifactId>mail</artifactId>
        </exclusion>
        <exclusion>
          <groupId>javax.jms</groupId>
          <artifactId>jms</artifactId>
        </exclusion>
        <exclusion>
          <groupId>com.sun.jdmk</groupId>
          <artifactId>jmxtools</artifactId>
        </exclusion>
        <exclusion>
          <groupId>com.sun.jmx</groupId>
          <artifactId>jmxri</artifactId>
        </exclusion>
      </exclusions>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <artifactId>maven-assembly-plugin</artifactId>
        <version>2.2.1</version>
        <configuration>
          <skipAssembly>true</skipAssembly>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.rat</groupId>
        <artifactId>apache-rat-plugin</artifactId>
        <version>0.7</version>
        <configuration>
          <excludes>
            <exclude>**/m4/*.m4</exclude>
            <exclude>**/aminclude.am</exclude>
          </excludes>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>
bookkeeper-release-4.2.4/hedwig-client/src/000077500000000000000000000000001244507361200206035ustar00rootroot00000000000000
bookkeeper-release-4.2.4/hedwig-client/src/main/000077500000000000000000000000001244507361200215275ustar00rootroot00000000000000
bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/000077500000000000000000000000001244507361200223115ustar00rootroot00000000000000
bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/Makefile.am000066400000000000000000000025051244507361200243470ustar00rootroot00000000000000
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
ACLOCAL_AMFLAGS = -I m4
SUBDIRS = lib test
library_includedir=$(includedir)/hedwig-0.1/hedwig
library_include_HEADERS = inc/hedwig/callback.h inc/hedwig/client.h inc/hedwig/exceptions.h inc/hedwig/publish.h inc/hedwig/subscribe.h
pkgconfigdir = $(libdir)/pkgconfig
nodist_pkgconfig_DATA = hedwig-0.1.pc
EXTRA_DIST = $(DX_CONFIG) doc/html
check:
	cd test; make check
simplesslcheck:
	cd test; make simplesslcheck
simplecheck:
	cd test; make simplecheck
multiplexsslcheck:
	cd test; make multiplexsslcheck
multiplexcheck:
	cd test; make multiplexcheck
bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/README000066400000000000000000000017501244507361200231740ustar00rootroot00000000000000
= BUILDING =

To build:

 $ libtoolize
 $ autoreconf -fi
 $ ./configure
 $ make

The devel packages for protobuf, log4cxx & boost are required to build.

= TESTING =

To test, Google Test (http://code.google.com/p/googletest/) is required. The project must be configured with the location of gtest. Making with the target "check" will run all the tests.

 $ ./configure --enable-gtest=/home/user/src/gtest-1.6.0
 $ make check

To run individual tests, first start a test cluster. We provide a convenience script to do this.

 $ sh scripts/tester.sh start-cluster

Once the cluster is running, you can run individual tests using the test harness.
$ test/hedwigtest --gtest_filter=PublishTest.testAsyncPublish To get a list of tests: $ test/hedwigtest --gtest_list_tests test/hedwigtest is a libtool wrapper, which cannot be used directly with gdb. To run a test with gdb: $ libtool --mode=execute gdb test/hedwigtest (gdb) run --gtest_filter=PublishTest.testAsyncPublish bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/aminclude.am000066400000000000000000000111751244507361200245760ustar00rootroot00000000000000# Copyright (C) 2004 Oren Ben-Kiki # This file is distributed under the same terms as the Automake macro files. # Generate automatic documentation using Doxygen. Goals and variables values # are controlled by the various DX_COND_??? conditionals set by autoconf. # # The provided goals are: # doxygen-doc: Generate all doxygen documentation. # doxygen-run: Run doxygen, which will generate some of the documentation # (HTML, CHM, CHI, MAN, RTF, XML) but will not do the post # processing required for the rest of it (PS, PDF, and some MAN). # doxygen-man: Rename some doxygen generated man pages. # doxygen-ps: Generate doxygen PostScript documentation. # doxygen-pdf: Generate doxygen PDF documentation. # # Note that by default these are not integrated into the automake goals. If # doxygen is used to generate man pages, you can achieve this integration by # setting man3_MANS to the list of man pages generated and then adding the # dependency: # # $(man3_MANS): doxygen-doc # # This will cause make to run doxygen and generate all the documentation. # # The following variable is intended for use in Makefile.am: # # DX_CLEANFILES = everything to clean. # # This is usually added to MOSTLYCLEANFILES. ## --------------------------------- ## ## Format-independent Doxygen rules. ## ## --------------------------------- ## if DX_COND_doc ## ------------------------------- ## ## Rules specific for HTML output. ## ## ------------------------------- ## if DX_COND_html DX_CLEAN_HTML = @DX_DOCDIR@/html endif DX_COND_html ## ------------------------------ ## ## Rules specific for CHM output. ## ## ------------------------------ ## if DX_COND_chm DX_CLEAN_CHM = @DX_DOCDIR@/chm if DX_COND_chi DX_CLEAN_CHI = @DX_DOCDIR@/@PACKAGE@.chi endif DX_COND_chi endif DX_COND_chm ## ------------------------------ ## ## Rules specific for MAN output. ## ## ------------------------------ ## if DX_COND_man DX_CLEAN_MAN = @DX_DOCDIR@/man endif DX_COND_man ## ------------------------------ ## ## Rules specific for RTF output. ## ## ------------------------------ ## if DX_COND_rtf DX_CLEAN_RTF = @DX_DOCDIR@/rtf endif DX_COND_rtf ## ------------------------------ ## ## Rules specific for XML output. ## ## ------------------------------ ## if DX_COND_xml DX_CLEAN_XML = @DX_DOCDIR@/xml endif DX_COND_xml ## ----------------------------- ## ## Rules specific for PS output. ## ## ----------------------------- ## if DX_COND_ps DX_CLEAN_PS = @DX_DOCDIR@/@PACKAGE@.ps DX_PS_GOAL = doxygen-ps doxygen-ps: @DX_DOCDIR@/@PACKAGE@.ps @DX_DOCDIR@/@PACKAGE@.ps: @DX_DOCDIR@/@PACKAGE@.tag cd @DX_DOCDIR@/latex; \ rm -f *.aux *.toc *.idx *.ind *.ilg *.log *.out; \ $(DX_LATEX) refman.tex; \ $(MAKEINDEX_PATH) refman.idx; \ $(DX_LATEX) refman.tex; \ countdown=5; \ while $(DX_EGREP) 'Rerun (LaTeX|to get cross-references right)' \ refman.log > /dev/null 2>&1 \ && test $$countdown -gt 0; do \ $(DX_LATEX) refman.tex; \ countdown=`expr $$countdown - 1`; \ done; \ $(DX_DVIPS) -o ../@PACKAGE@.ps refman.dvi endif DX_COND_ps ## ------------------------------ ## ## Rules specific for PDF output. 
## ## ------------------------------ ## if DX_COND_pdf DX_CLEAN_PDF = @DX_DOCDIR@/@PACKAGE@.pdf DX_PDF_GOAL = doxygen-pdf doxygen-pdf: @DX_DOCDIR@/@PACKAGE@.pdf @DX_DOCDIR@/@PACKAGE@.pdf: @DX_DOCDIR@/@PACKAGE@.tag cd @DX_DOCDIR@/latex; \ rm -f *.aux *.toc *.idx *.ind *.ilg *.log *.out; \ $(DX_PDFLATEX) refman.tex; \ $(DX_MAKEINDEX) refman.idx; \ $(DX_PDFLATEX) refman.tex; \ countdown=5; \ while $(DX_EGREP) 'Rerun (LaTeX|to get cross-references right)' \ refman.log > /dev/null 2>&1 \ && test $$countdown -gt 0; do \ $(DX_PDFLATEX) refman.tex; \ countdown=`expr $$countdown - 1`; \ done; \ mv refman.pdf ../@PACKAGE@.pdf endif DX_COND_pdf ## ------------------------------------------------- ## ## Rules specific for LaTeX (shared for PS and PDF). ## ## ------------------------------------------------- ## if DX_COND_latex DX_CLEAN_LATEX = @DX_DOCDIR@/latex endif DX_COND_latex .PHONY: doxygen-run doxygen-doc $(DX_PS_GOAL) $(DX_PDF_GOAL) .INTERMEDIATE: doxygen-run $(DX_PS_GOAL) $(DX_PDF_GOAL) doxygen-run: @DX_DOCDIR@/@PACKAGE@.tag doxygen-doc: doxygen-run $(DX_PS_GOAL) $(DX_PDF_GOAL) @DX_DOCDIR@/@PACKAGE@.tag: $(DX_CONFIG) $(pkginclude_HEADERS) rm -rf @DX_DOCDIR@ $(DX_ENV) $(DX_DOXYGEN) $(srcdir)/$(DX_CONFIG) DX_CLEANFILES = \ @DX_DOCDIR@/@PACKAGE@.tag \ -r \ $(DX_CLEAN_HTML) \ $(DX_CLEAN_CHM) \ $(DX_CLEAN_CHI) \ $(DX_CLEAN_MAN) \ $(DX_CLEAN_RTF) \ $(DX_CLEAN_XML) \ $(DX_CLEAN_PS) \ $(DX_CLEAN_PDF) \ $(DX_CLEAN_LATEX) endif DX_COND_doc bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/c-doc.Doxyfile000066400000000000000000001471721244507361200250170ustar00rootroot00000000000000# Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements. See the NOTICE file distributed with # this work for additional information regarding copyright ownership. # The ASF licenses this file to You under the Apache License, Version 2.0 # (the "License"); you may not use this file except in compliance with # the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Doxyfile 1.4.7 # This file describes the settings to be used by the documentation system # doxygen (www.doxygen.org) for a project # # All text after a hash (#) is considered a comment and will be ignored # The format is: # TAG = value [value, ...] # For lists items can also be appended using: # TAG += value [value, ...] # Values that contain spaces should be placed between quotes (" ") #--------------------------------------------------------------------------- # Project related configuration options #--------------------------------------------------------------------------- # The PROJECT_NAME tag is a single word (or a sequence of words surrounded # by quotes) that should identify the project. PROJECT_NAME = $(PROJECT)-$(VERSION) # The PROJECT_NUMBER tag can be used to enter a project or revision number. # This could be handy for archiving the generated documentation or # if some version control system is used. PROJECT_NUMBER = # The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) # base path where the generated documentation will be put. # If a relative path is entered, it will be relative to the location # where doxygen was started. 
If left blank the current directory will be used. OUTPUT_DIRECTORY = $(DOCDIR) # If the CREATE_SUBDIRS tag is set to YES, then doxygen will create # 4096 sub-directories (in 2 levels) under the output directory of each output # format and will distribute the generated files over these directories. # Enabling this option can be useful when feeding doxygen a huge amount of # source files, where putting all generated files in the same directory would # otherwise cause performance problems for the file system. CREATE_SUBDIRS = NO # The OUTPUT_LANGUAGE tag is used to specify the language in which all # documentation generated by doxygen is written. Doxygen will use this # information to generate all constant output in the proper language. # The default language is English, other supported languages are: # Brazilian, Catalan, Chinese, Chinese-Traditional, Croatian, Czech, Danish, # Dutch, Finnish, French, German, Greek, Hungarian, Italian, Japanese, # Japanese-en (Japanese with English messages), Korean, Korean-en, Norwegian, # Polish, Portuguese, Romanian, Russian, Serbian, Slovak, Slovene, Spanish, # Swedish, and Ukrainian. OUTPUT_LANGUAGE = English # This tag can be used to specify the encoding used in the generated output. # The encoding is not always determined by the language that is chosen, # but also whether or not the output is meant for Windows or non-Windows users. # In case there is a difference, setting the USE_WINDOWS_ENCODING tag to YES # forces the Windows encoding (this is the default for the Windows binary), # whereas setting the tag to NO uses a Unix-style encoding (the default for # all platforms other than Windows). USE_WINDOWS_ENCODING = NO # If the BRIEF_MEMBER_DESC tag is set to YES (the default) Doxygen will # include brief member descriptions after the members that are listed in # the file and class documentation (similar to JavaDoc). # Set to NO to disable this. BRIEF_MEMBER_DESC = YES # If the REPEAT_BRIEF tag is set to YES (the default) Doxygen will prepend # the brief description of a member or function before the detailed description. # Note: if both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the # brief descriptions will be completely suppressed. REPEAT_BRIEF = YES # This tag implements a quasi-intelligent brief description abbreviator # that is used to form the text in various listings. Each string # in this list, if found as the leading text of the brief description, will be # stripped from the text and the result after processing the whole list, is # used as the annotated text. Otherwise, the brief description is used as-is. # If left blank, the following values are used ("$name" is automatically # replaced with the name of the entity): "The $name class" "The $name widget" # "The $name file" "is" "provides" "specifies" "contains" # "represents" "a" "an" "the" ABBREVIATE_BRIEF = # If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then # Doxygen will generate a detailed section even if there is only a brief # description. ALWAYS_DETAILED_SEC = NO # If the INLINE_INHERITED_MEMB tag is set to YES, doxygen will show all # inherited members of a class in the documentation of that class as if those # members were ordinary class members. Constructors, destructors and assignment # operators of the base classes will not be shown. INLINE_INHERITED_MEMB = NO # If the FULL_PATH_NAMES tag is set to YES then Doxygen will prepend the full # path before files name in the file list and in the header files. 
If set # to NO the shortest path that makes the file name unique will be used. FULL_PATH_NAMES = YES # If the FULL_PATH_NAMES tag is set to YES then the STRIP_FROM_PATH tag # can be used to strip a user-defined part of the path. Stripping is # only done if one of the specified strings matches the left-hand part of # the path. The tag can be used to show relative paths in the file list. # If left blank the directory from which doxygen is run is used as the # path to strip. STRIP_FROM_PATH = # The STRIP_FROM_INC_PATH tag can be used to strip a user-defined part of # the path mentioned in the documentation of a class, which tells # the reader which header file to include in order to use a class. # If left blank only the name of the header file containing the class # definition is used. Otherwise one should specify the include paths that # are normally passed to the compiler using the -I flag. STRIP_FROM_INC_PATH = # If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter # (but less readable) file names. This can be useful is your file systems # doesn't support long names like on DOS, Mac, or CD-ROM. SHORT_NAMES = NO # If the JAVADOC_AUTOBRIEF tag is set to YES then Doxygen # will interpret the first line (until the first dot) of a JavaDoc-style # comment as the brief description. If set to NO, the JavaDoc # comments will behave just like the Qt-style comments (thus requiring an # explicit @brief command for a brief description. JAVADOC_AUTOBRIEF = NO # The MULTILINE_CPP_IS_BRIEF tag can be set to YES to make Doxygen # treat a multi-line C++ special comment block (i.e. a block of //! or /// # comments) as a brief description. This used to be the default behaviour. # The new default is to treat a multi-line C++ comment block as a detailed # description. Set this tag to YES if you prefer the old behaviour instead. MULTILINE_CPP_IS_BRIEF = NO # If the DETAILS_AT_TOP tag is set to YES then Doxygen # will output the detailed description near the top, like JavaDoc. # If set to NO, the detailed description appears after the member # documentation. DETAILS_AT_TOP = NO # If the INHERIT_DOCS tag is set to YES (the default) then an undocumented # member inherits the documentation from any documented member that it # re-implements. INHERIT_DOCS = YES # If the SEPARATE_MEMBER_PAGES tag is set to YES, then doxygen will produce # a new page for each member. If set to NO, the documentation of a member will # be part of the file/class/namespace that contains it. SEPARATE_MEMBER_PAGES = NO # The TAB_SIZE tag can be used to set the number of spaces in a tab. # Doxygen uses this value to replace tabs by spaces in code fragments. TAB_SIZE = 8 # This tag can be used to specify a number of aliases that acts # as commands in the documentation. An alias has the form "name=value". # For example adding "sideeffect=\par Side Effects:\n" will allow you to # put the command \sideeffect (or @sideeffect) in the documentation, which # will result in a user-defined paragraph with heading "Side Effects:". # You can put \n's in the value part of an alias to insert newlines. ALIASES = # Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C # sources only. Doxygen will then generate output that is more tailored for C. # For instance, some of the names that are used will be different. The list # of all members will be omitted, etc. OPTIMIZE_OUTPUT_FOR_C = YES # Set the OPTIMIZE_OUTPUT_JAVA tag to YES if your project consists of Java # sources only. 
Doxygen will then generate output that is more tailored for Java. # For instance, namespaces will be presented as packages, qualified scopes # will look different, etc. OPTIMIZE_OUTPUT_JAVA = NO # If you use STL classes (i.e. std::string, std::vector, etc.) but do not want to # include (a tag file for) the STL sources as input, then you should # set this tag to YES in order to let doxygen match functions declarations and # definitions whose arguments contain STL classes (e.g. func(std::string); v.s. # func(std::string) {}). This also make the inheritance and collaboration # diagrams that involve STL classes more complete and accurate. BUILTIN_STL_SUPPORT = NO # If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC # tag is set to YES, then doxygen will reuse the documentation of the first # member in the group (if any) for the other members of the group. By default # all members of a group must be documented explicitly. DISTRIBUTE_GROUP_DOC = NO # Set the SUBGROUPING tag to YES (the default) to allow class member groups of # the same type (for instance a group of public functions) to be put as a # subgroup of that type (e.g. under the Public Functions section). Set it to # NO to prevent subgrouping. Alternatively, this can be done per class using # the \nosubgrouping command. SUBGROUPING = YES #--------------------------------------------------------------------------- # Build related configuration options #--------------------------------------------------------------------------- # If the EXTRACT_ALL tag is set to YES doxygen will assume all entities in # documentation are documented, even if no documentation was available. # Private class members and static file members will be hidden unless # the EXTRACT_PRIVATE and EXTRACT_STATIC tags are set to YES EXTRACT_ALL = NO # If the EXTRACT_PRIVATE tag is set to YES all private members of a class # will be included in the documentation. EXTRACT_PRIVATE = NO # If the EXTRACT_STATIC tag is set to YES all static members of a file # will be included in the documentation. EXTRACT_STATIC = YES # If the EXTRACT_LOCAL_CLASSES tag is set to YES classes (and structs) # defined locally in source files will be included in the documentation. # If set to NO only classes defined in header files are included. EXTRACT_LOCAL_CLASSES = YES # This flag is only useful for Objective-C code. When set to YES local # methods, which are defined in the implementation section but not in # the interface are included in the documentation. # If set to NO (the default) only methods in the interface are included. EXTRACT_LOCAL_METHODS = NO # If the HIDE_UNDOC_MEMBERS tag is set to YES, Doxygen will hide all # undocumented members of documented classes, files or namespaces. # If set to NO (the default) these members will be included in the # various overviews, but no documentation section is generated. # This option has no effect if EXTRACT_ALL is enabled. HIDE_UNDOC_MEMBERS = NO # If the HIDE_UNDOC_CLASSES tag is set to YES, Doxygen will hide all # undocumented classes that are normally visible in the class hierarchy. # If set to NO (the default) these classes will be included in the various # overviews. This option has no effect if EXTRACT_ALL is enabled. HIDE_UNDOC_CLASSES = NO # If the HIDE_FRIEND_COMPOUNDS tag is set to YES, Doxygen will hide all # friend (class|struct|union) declarations. # If set to NO (the default) these declarations will be included in the # documentation. 
HIDE_FRIEND_COMPOUNDS = NO # If the HIDE_IN_BODY_DOCS tag is set to YES, Doxygen will hide any # documentation blocks found inside the body of a function. # If set to NO (the default) these blocks will be appended to the # function's detailed documentation block. HIDE_IN_BODY_DOCS = NO # The INTERNAL_DOCS tag determines if documentation # that is typed after a \internal command is included. If the tag is set # to NO (the default) then the documentation will be excluded. # Set it to YES to include the internal documentation. INTERNAL_DOCS = NO # If the CASE_SENSE_NAMES tag is set to NO then Doxygen will only generate # file names in lower-case letters. If set to YES upper-case letters are also # allowed. This is useful if you have classes or files whose names only differ # in case and if your file system supports case sensitive file names. Windows # and Mac users are advised to set this option to NO. CASE_SENSE_NAMES = YES # If the HIDE_SCOPE_NAMES tag is set to NO (the default) then Doxygen # will show members with their full class and namespace scopes in the # documentation. If set to YES the scope will be hidden. HIDE_SCOPE_NAMES = NO # If the SHOW_INCLUDE_FILES tag is set to YES (the default) then Doxygen # will put a list of the files that are included by a file in the documentation # of that file. SHOW_INCLUDE_FILES = NO # If the INLINE_INFO tag is set to YES (the default) then a tag [inline] # is inserted in the documentation for inline members. INLINE_INFO = YES # If the SORT_MEMBER_DOCS tag is set to YES (the default) then doxygen # will sort the (detailed) documentation of file and class members # alphabetically by member name. If set to NO the members will appear in # declaration order. SORT_MEMBER_DOCS = YES # If the SORT_BRIEF_DOCS tag is set to YES then doxygen will sort the # brief documentation of file, namespace and class members alphabetically # by member name. If set to NO (the default) the members will appear in # declaration order. SORT_BRIEF_DOCS = NO # If the SORT_BY_SCOPE_NAME tag is set to YES, the class list will be # sorted by fully-qualified names, including namespaces. If set to # NO (the default), the class list will be sorted only by class name, # not including the namespace part. # Note: This option is not very useful if HIDE_SCOPE_NAMES is set to YES. # Note: This option applies only to the class list, not to the # alphabetical list. SORT_BY_SCOPE_NAME = NO # The GENERATE_TODOLIST tag can be used to enable (YES) or # disable (NO) the todo list. This list is created by putting \todo # commands in the documentation. GENERATE_TODOLIST = YES # The GENERATE_TESTLIST tag can be used to enable (YES) or # disable (NO) the test list. This list is created by putting \test # commands in the documentation. GENERATE_TESTLIST = YES # The GENERATE_BUGLIST tag can be used to enable (YES) or # disable (NO) the bug list. This list is created by putting \bug # commands in the documentation. GENERATE_BUGLIST = YES # The GENERATE_DEPRECATEDLIST tag can be used to enable (YES) or # disable (NO) the deprecated list. This list is created by putting # \deprecated commands in the documentation. GENERATE_DEPRECATEDLIST = YES # The ENABLED_SECTIONS tag can be used to enable conditional # documentation sections, marked by \if sectionname ... \endif. ENABLED_SECTIONS = # The MAX_INITIALIZER_LINES tag determines the maximum number of lines # the initial value of a variable or define consists of for it to appear in # the documentation. 
If the initializer consists of more lines than specified # here it will be hidden. Use a value of 0 to hide initializers completely. # The appearance of the initializer of individual variables and defines in the # documentation can be controlled using \showinitializer or \hideinitializer # command in the documentation regardless of this setting. MAX_INITIALIZER_LINES = 30 # Set the SHOW_USED_FILES tag to NO to disable the list of files generated # at the bottom of the documentation of classes and structs. If set to YES the # list will mention the files that were used to generate the documentation. SHOW_USED_FILES = YES # If the sources in your project are distributed over multiple directories # then setting the SHOW_DIRECTORIES tag to YES will show the directory hierarchy # in the documentation. The default is NO. SHOW_DIRECTORIES = NO # The FILE_VERSION_FILTER tag can be used to specify a program or script that # doxygen should invoke to get the current version for each file (typically from the # version control system). Doxygen will invoke the program by executing (via # popen()) the command , where is the value of # the FILE_VERSION_FILTER tag, and is the name of an input file # provided by doxygen. Whatever the program writes to standard output # is used as the file version. See the manual for examples. FILE_VERSION_FILTER = #--------------------------------------------------------------------------- # configuration options related to warning and progress messages #--------------------------------------------------------------------------- # The QUIET tag can be used to turn on/off the messages that are generated # by doxygen. Possible values are YES and NO. If left blank NO is used. QUIET = NO # The WARNINGS tag can be used to turn on/off the warning messages that are # generated by doxygen. Possible values are YES and NO. If left blank # NO is used. WARNINGS = YES # If WARN_IF_UNDOCUMENTED is set to YES, then doxygen will generate warnings # for undocumented members. If EXTRACT_ALL is set to YES then this flag will # automatically be disabled. WARN_IF_UNDOCUMENTED = YES # If WARN_IF_DOC_ERROR is set to YES, doxygen will generate warnings for # potential errors in the documentation, such as not documenting some # parameters in a documented function, or documenting parameters that # don't exist or using markup commands wrongly. WARN_IF_DOC_ERROR = YES # This WARN_NO_PARAMDOC option can be abled to get warnings for # functions that are documented, but have no documentation for their parameters # or return value. If set to NO (the default) doxygen will only warn about # wrong or incomplete parameter documentation, but not about the absence of # documentation. WARN_NO_PARAMDOC = NO # The WARN_FORMAT tag determines the format of the warning messages that # doxygen can produce. The string should contain the $file, $line, and $text # tags, which will be replaced by the file and line number from which the # warning originated and the warning text. Optionally the format may contain # $version, which will be replaced by the version of the file (if it could # be obtained via FILE_VERSION_FILTER) WARN_FORMAT = "$file:$line: $text" # The WARN_LOGFILE tag can be used to specify a file to which warning # and error messages should be written. If left blank the output is written # to stderr. 
WARN_LOGFILE = #--------------------------------------------------------------------------- # configuration options related to the input files #--------------------------------------------------------------------------- # The INPUT tag can be used to specify the files and/or directories that contain # documented source files. You may enter file names like "myfile.cpp" or # directories like "/usr/src/myproject". Separate the files or directories # with spaces. INPUT = inc/ lib/ # If the value of the INPUT tag contains directories, you can use the # FILE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp # and *.h) to filter out the source-files in the directories. If left # blank the following patterns are tested: # *.c *.cc *.cxx *.cpp *.c++ *.java *.ii *.ixx *.ipp *.i++ *.inl *.h *.hh *.hxx # *.hpp *.h++ *.idl *.odl *.cs *.php *.php3 *.inc *.m *.mm *.py FILE_PATTERNS = # The RECURSIVE tag can be used to turn specify whether or not subdirectories # should be searched for input files as well. Possible values are YES and NO. # If left blank NO is used. RECURSIVE = NO # The EXCLUDE tag can be used to specify files and/or directories that should # excluded from the INPUT source files. This way you can easily exclude a # subdirectory from a directory tree whose root is specified with the INPUT tag. EXCLUDE = # The EXCLUDE_SYMLINKS tag can be used select whether or not files or # directories that are symbolic links (a Unix filesystem feature) are excluded # from the input. EXCLUDE_SYMLINKS = NO # If the value of the INPUT tag contains directories, you can use the # EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude # certain files from those directories. Note that the wildcards are matched # against the file with absolute path, so to exclude all test directories # for example use the pattern */test/* EXCLUDE_PATTERNS = # The EXAMPLE_PATH tag can be used to specify one or more files or # directories that contain example code fragments that are included (see # the \include command). EXAMPLE_PATH = # If the value of the EXAMPLE_PATH tag contains directories, you can use the # EXAMPLE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp # and *.h) to filter out the source-files in the directories. If left # blank all files are included. EXAMPLE_PATTERNS = # If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be # searched for input files to be used with the \include or \dontinclude # commands irrespective of the value of the RECURSIVE tag. # Possible values are YES and NO. If left blank NO is used. EXAMPLE_RECURSIVE = NO # The IMAGE_PATH tag can be used to specify one or more files or # directories that contain image that are included in the documentation (see # the \image command). IMAGE_PATH = # The INPUT_FILTER tag can be used to specify a program that doxygen should # invoke to filter for each input file. Doxygen will invoke the filter program # by executing (via popen()) the command , where # is the value of the INPUT_FILTER tag, and is the name of an # input file. Doxygen will then use the output that the filter program writes # to standard output. If FILTER_PATTERNS is specified, this tag will be # ignored. INPUT_FILTER = # The FILTER_PATTERNS tag can be used to specify filters on a per file pattern # basis. Doxygen will compare the file name with each pattern and apply the # filter if there is a match. The filters are a list of the form: # pattern=filter (like *.cpp=my_cpp_filter). 
See INPUT_FILTER for further # info on how filters are used. If FILTER_PATTERNS is empty, INPUT_FILTER # is applied to all files. FILTER_PATTERNS = # If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using # INPUT_FILTER) will be used to filter the input files when producing source # files to browse (i.e. when SOURCE_BROWSER is set to YES). FILTER_SOURCE_FILES = NO #--------------------------------------------------------------------------- # configuration options related to source browsing #--------------------------------------------------------------------------- # If the SOURCE_BROWSER tag is set to YES then a list of source files will # be generated. Documented entities will be cross-referenced with these sources. # Note: To get rid of all source code in the generated output, make sure also # VERBATIM_HEADERS is set to NO. SOURCE_BROWSER = NO # Setting the INLINE_SOURCES tag to YES will include the body # of functions and classes directly in the documentation. INLINE_SOURCES = NO # Setting the STRIP_CODE_COMMENTS tag to YES (the default) will instruct # doxygen to hide any special comment blocks from generated source code # fragments. Normal C and C++ comments will always remain visible. STRIP_CODE_COMMENTS = YES # If the REFERENCED_BY_RELATION tag is set to YES (the default) # then for each documented function all documented # functions referencing it will be listed. REFERENCED_BY_RELATION = YES # If the REFERENCES_RELATION tag is set to YES (the default) # then for each documented function all documented entities # called/used by that function will be listed. REFERENCES_RELATION = YES # If the REFERENCES_LINK_SOURCE tag is set to YES (the default) # and SOURCE_BROWSER tag is set to YES, then the hyperlinks from # functions in REFERENCES_RELATION and REFERENCED_BY_RELATION lists will # link to the source code. Otherwise they will link to the documentstion. REFERENCES_LINK_SOURCE = YES # If the USE_HTAGS tag is set to YES then the references to source code # will point to the HTML generated by the htags(1) tool instead of doxygen # built-in source browser. The htags tool is part of GNU's global source # tagging system (see http://www.gnu.org/software/global/global.html). You # will need version 4.8.6 or higher. USE_HTAGS = NO # If the VERBATIM_HEADERS tag is set to YES (the default) then Doxygen # will generate a verbatim copy of the header file for each class for # which an include is specified. Set to NO to disable this. VERBATIM_HEADERS = YES #--------------------------------------------------------------------------- # configuration options related to the alphabetical class index #--------------------------------------------------------------------------- # If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index # of all compounds will be generated. Enable this if the project # contains a lot of classes, structs, unions or interfaces. ALPHABETICAL_INDEX = NO # If the alphabetical index is enabled (see ALPHABETICAL_INDEX) then # the COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns # in which this list will be split (can be a number in the range [1..20]) COLS_IN_ALPHA_INDEX = 5 # In case all classes in a project start with a common prefix, all # classes will be put under the same header in the alphabetical index. # The IGNORE_PREFIX tag can be used to specify one or more prefixes that # should be ignored while generating the index headers. 
IGNORE_PREFIX = #--------------------------------------------------------------------------- # configuration options related to the HTML output #--------------------------------------------------------------------------- # If the GENERATE_HTML tag is set to YES (the default) Doxygen will # generate HTML output. GENERATE_HTML = $(GENERATE_HTML) # The HTML_OUTPUT tag is used to specify where the HTML docs will be put. # If a relative path is entered the value of OUTPUT_DIRECTORY will be # put in front of it. If left blank `html' will be used as the default path. HTML_OUTPUT = html # The HTML_FILE_EXTENSION tag can be used to specify the file extension for # each generated HTML page (for example: .htm,.php,.asp). If it is left blank # doxygen will generate files with .html extension. HTML_FILE_EXTENSION = .html # The HTML_HEADER tag can be used to specify a personal HTML header for # each generated HTML page. If it is left blank doxygen will generate a # standard header. HTML_HEADER = # The HTML_FOOTER tag can be used to specify a personal HTML footer for # each generated HTML page. If it is left blank doxygen will generate a # standard footer. HTML_FOOTER = # The HTML_STYLESHEET tag can be used to specify a user-defined cascading # style sheet that is used by each HTML page. It can be used to # fine-tune the look of the HTML output. If the tag is left blank doxygen # will generate a default style sheet. Note that doxygen will try to copy # the style sheet file to the HTML output directory, so don't put your own # stylesheet in the HTML output directory as well, or it will be erased! HTML_STYLESHEET = # If the HTML_ALIGN_MEMBERS tag is set to YES, the members of classes, # files or namespaces will be aligned in HTML using tables. If set to # NO a bullet list will be used. HTML_ALIGN_MEMBERS = YES # If the GENERATE_HTMLHELP tag is set to YES, additional index files # will be generated that can be used as input for tools like the # Microsoft HTML help workshop to generate a compressed HTML help file (.chm) # of the generated HTML documentation. GENERATE_HTMLHELP = $(GENERATE_HTMLHELP) # If the GENERATE_HTMLHELP tag is set to YES, the CHM_FILE tag can # be used to specify the file name of the resulting .chm file. You # can add a path in front of the file if the result should not be # written to the html output directory. CHM_FILE = ../$(PROJECT).chm # If the GENERATE_HTMLHELP tag is set to YES, the HHC_LOCATION tag can # be used to specify the location (absolute path including file name) of # the HTML help compiler (hhc.exe). If non-empty doxygen will try to run # the HTML help compiler on the generated index.hhp. HHC_LOCATION = $(HHC_PATH) # If the GENERATE_HTMLHELP tag is set to YES, the GENERATE_CHI flag # controls if a separate .chi index file is generated (YES) or that # it should be included in the master .chm file (NO). GENERATE_CHI = $(GENERATE_CHI) # If the GENERATE_HTMLHELP tag is set to YES, the BINARY_TOC flag # controls whether a binary table of contents is generated (YES) or a # normal table of contents (NO) in the .chm file. BINARY_TOC = NO # The TOC_EXPAND flag can be set to YES to add extra items for group members # to the contents of the HTML help documentation and to the tree view. TOC_EXPAND = NO # The DISABLE_INDEX tag can be used to turn on/off the condensed index at # top of each HTML page. The value NO (the default) enables the index and # the value YES disables it. 
DISABLE_INDEX = NO # This tag can be used to set the number of enum values (range [1..20]) # that doxygen will group on one line in the generated HTML documentation. ENUM_VALUES_PER_LINE = 4 # If the GENERATE_TREEVIEW tag is set to YES, a side panel will be # generated containing a tree-like index structure (just like the one that # is generated for HTML Help). For this to work a browser that supports # JavaScript, DHTML, CSS and frames is required (for instance Mozilla 1.0+, # Netscape 6.0+, Internet explorer 5.0+, or Konqueror). Windows users are # probably better off using the HTML help feature. GENERATE_TREEVIEW = NO # If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be # used to set the initial width (in pixels) of the frame in which the tree # is shown. TREEVIEW_WIDTH = 250 #--------------------------------------------------------------------------- # configuration options related to the LaTeX output #--------------------------------------------------------------------------- # If the GENERATE_LATEX tag is set to YES (the default) Doxygen will # generate Latex output. GENERATE_LATEX = $(GENERATE_LATEX) # The LATEX_OUTPUT tag is used to specify where the LaTeX docs will be put. # If a relative path is entered the value of OUTPUT_DIRECTORY will be # put in front of it. If left blank `latex' will be used as the default path. LATEX_OUTPUT = latex # The LATEX_CMD_NAME tag can be used to specify the LaTeX command name to be # invoked. If left blank `latex' will be used as the default command name. LATEX_CMD_NAME = latex # The MAKEINDEX_CMD_NAME tag can be used to specify the command name to # generate index for LaTeX. If left blank `makeindex' will be used as the # default command name. MAKEINDEX_CMD_NAME = makeindex # If the COMPACT_LATEX tag is set to YES Doxygen generates more compact # LaTeX documents. This may be useful for small projects and may help to # save some trees in general. COMPACT_LATEX = NO # The PAPER_TYPE tag can be used to set the paper type that is used # by the printer. Possible values are: a4, a4wide, letter, legal and # executive. If left blank a4wide will be used. PAPER_TYPE = $(PAPER_SIZE) # The EXTRA_PACKAGES tag can be to specify one or more names of LaTeX # packages that should be included in the LaTeX output. EXTRA_PACKAGES = # The LATEX_HEADER tag can be used to specify a personal LaTeX header for # the generated latex document. The header should contain everything until # the first chapter. If it is left blank doxygen will generate a # standard header. Notice: only use this tag if you know what you are doing! LATEX_HEADER = # If the PDF_HYPERLINKS tag is set to YES, the LaTeX that is generated # is prepared for conversion to pdf (using ps2pdf). The pdf file will # contain links (just like the HTML output) instead of page references # This makes the output suitable for online browsing using a pdf viewer. PDF_HYPERLINKS = NO # If the USE_PDFLATEX tag is set to YES, pdflatex will be used instead of # plain latex in the generated Makefile. Set this option to YES to get a # higher quality PDF documentation. USE_PDFLATEX = $(GENERATE_PDF) # If the LATEX_BATCHMODE tag is set to YES, doxygen will add the \\batchmode. # command to the generated LaTeX files. This will instruct LaTeX to keep # running if errors occur, instead of asking the user for help. # This option is also used when generating formulas in HTML. 
LATEX_BATCHMODE = NO # If LATEX_HIDE_INDICES is set to YES then doxygen will not # include the index chapters (such as File Index, Compound Index, etc.) # in the output. LATEX_HIDE_INDICES = NO #--------------------------------------------------------------------------- # configuration options related to the RTF output #--------------------------------------------------------------------------- # If the GENERATE_RTF tag is set to YES Doxygen will generate RTF output # The RTF output is optimized for Word 97 and may not look very pretty with # other RTF readers or editors. GENERATE_RTF = $(GENERATE_RTF) # The RTF_OUTPUT tag is used to specify where the RTF docs will be put. # If a relative path is entered the value of OUTPUT_DIRECTORY will be # put in front of it. If left blank `rtf' will be used as the default path. RTF_OUTPUT = rtf # If the COMPACT_RTF tag is set to YES Doxygen generates more compact # RTF documents. This may be useful for small projects and may help to # save some trees in general. COMPACT_RTF = NO # If the RTF_HYPERLINKS tag is set to YES, the RTF that is generated # will contain hyperlink fields. The RTF file will # contain links (just like the HTML output) instead of page references. # This makes the output suitable for online browsing using WORD or other # programs which support those fields. # Note: wordpad (write) and others do not support links. RTF_HYPERLINKS = NO # Load stylesheet definitions from file. Syntax is similar to doxygen's # config file, i.e. a series of assignments. You only have to provide # replacements, missing definitions are set to their default value. RTF_STYLESHEET_FILE = # Set optional variables used in the generation of an rtf document. # Syntax is similar to doxygen's config file. RTF_EXTENSIONS_FILE = #--------------------------------------------------------------------------- # configuration options related to the man page output #--------------------------------------------------------------------------- # If the GENERATE_MAN tag is set to YES (the default) Doxygen will # generate man pages GENERATE_MAN = $(GENERATE_MAN) # The MAN_OUTPUT tag is used to specify where the man pages will be put. # If a relative path is entered the value of OUTPUT_DIRECTORY will be # put in front of it. If left blank `man' will be used as the default path. MAN_OUTPUT = man # The MAN_EXTENSION tag determines the extension that is added to # the generated man pages (default is the subroutine's section .3) MAN_EXTENSION = .3 # If the MAN_LINKS tag is set to YES and Doxygen generates man output, # then it will generate one additional man file for each entity # documented in the real man page(s). These additional files # only source the real man page, but without them the man command # would be unable to find the correct page. The default is NO. MAN_LINKS = NO #--------------------------------------------------------------------------- # configuration options related to the XML output #--------------------------------------------------------------------------- # If the GENERATE_XML tag is set to YES Doxygen will # generate an XML file that captures the structure of # the code including all documentation. GENERATE_XML = $(GENERATE_XML) # The XML_OUTPUT tag is used to specify where the XML pages will be put. # If a relative path is entered the value of OUTPUT_DIRECTORY will be # put in front of it. If left blank `xml' will be used as the default path. 
XML_OUTPUT = xml # The XML_SCHEMA tag can be used to specify an XML schema, # which can be used by a validating XML parser to check the # syntax of the XML files. XML_SCHEMA = # The XML_DTD tag can be used to specify an XML DTD, # which can be used by a validating XML parser to check the # syntax of the XML files. XML_DTD = # If the XML_PROGRAMLISTING tag is set to YES Doxygen will # dump the program listings (including syntax highlighting # and cross-referencing information) to the XML output. Note that # enabling this will significantly increase the size of the XML output. XML_PROGRAMLISTING = YES #--------------------------------------------------------------------------- # configuration options for the AutoGen Definitions output #--------------------------------------------------------------------------- # If the GENERATE_AUTOGEN_DEF tag is set to YES Doxygen will # generate an AutoGen Definitions (see autogen.sf.net) file # that captures the structure of the code including all # documentation. Note that this feature is still experimental # and incomplete at the moment. GENERATE_AUTOGEN_DEF = NO #--------------------------------------------------------------------------- # configuration options related to the Perl module output #--------------------------------------------------------------------------- # If the GENERATE_PERLMOD tag is set to YES Doxygen will # generate a Perl module file that captures the structure of # the code including all documentation. Note that this # feature is still experimental and incomplete at the # moment. GENERATE_PERLMOD = NO # If the PERLMOD_LATEX tag is set to YES Doxygen will generate # the necessary Makefile rules, Perl scripts and LaTeX code to be able # to generate PDF and DVI output from the Perl module output. PERLMOD_LATEX = NO # If the PERLMOD_PRETTY tag is set to YES the Perl module output will be # nicely formatted so it can be parsed by a human reader. This is useful # if you want to understand what is going on. On the other hand, if this # tag is set to NO the size of the Perl module output will be much smaller # and Perl will parse it just the same. PERLMOD_PRETTY = YES # The names of the make variables in the generated doxyrules.make file # are prefixed with the string contained in PERLMOD_MAKEVAR_PREFIX. # This is useful so different doxyrules.make files included by the same # Makefile don't overwrite each other's variables. PERLMOD_MAKEVAR_PREFIX = #--------------------------------------------------------------------------- # Configuration options related to the preprocessor #--------------------------------------------------------------------------- # If the ENABLE_PREPROCESSING tag is set to YES (the default) Doxygen will # evaluate all C-preprocessor directives found in the sources and include # files. ENABLE_PREPROCESSING = YES # If the MACRO_EXPANSION tag is set to YES Doxygen will expand all macro # names in the source code. If set to NO (the default) only conditional # compilation will be performed. Macro expansion can be done in a controlled # way by setting EXPAND_ONLY_PREDEF to YES. MACRO_EXPANSION = NO # If the EXPAND_ONLY_PREDEF and MACRO_EXPANSION tags are both set to YES # then the macro expansion is limited to the macros specified with the # PREDEFINED and EXPAND_AS_DEFINED tags. EXPAND_ONLY_PREDEF = NO # If the SEARCH_INCLUDES tag is set to YES (the default) the include files # in the INCLUDE_PATH (see below) will be searched if a #include is found. 
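# --- Illustrative example (commented out) ----------------------------------
# A hypothetical preprocessor override for this tree: making doxygen document
# the code paths guarded by the USE_BOOST_TR1 feature macro (see the
# PREDEFINED tag documented below):
# PREDEFINED = USE_BOOST_TR1
#---------------------------------------------------------------------------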
SEARCH_INCLUDES = YES # The INCLUDE_PATH tag can be used to specify one or more directories that # contain include files that are not input files but should be processed by # the preprocessor. INCLUDE_PATH = # You can use the INCLUDE_FILE_PATTERNS tag to specify one or more wildcard # patterns (like *.h and *.hpp) to filter out the header-files in the # directories. If left blank, the patterns specified with FILE_PATTERNS will # be used. INCLUDE_FILE_PATTERNS = # The PREDEFINED tag can be used to specify one or more macro names that # are defined before the preprocessor is started (similar to the -D option of # gcc). The argument of the tag is a list of macros of the form: name # or name=definition (no spaces). If the definition and the = are # omitted =1 is assumed. To prevent a macro definition from being # undefined via #undef or recursively expanded use the := operator # instead of the = operator. PREDEFINED = # If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then # this tag can be used to specify a list of macro names that should be expanded. # The macro definition that is found in the sources will be used. # Use the PREDEFINED tag if you want to use a different macro definition. EXPAND_AS_DEFINED = # If the SKIP_FUNCTION_MACROS tag is set to YES (the default) then # doxygen's preprocessor will remove all function-like macros that are alone # on a line, have an all uppercase name, and do not end with a semicolon. Such # function macros are typically used for boiler-plate code, and will confuse # the parser if not removed. SKIP_FUNCTION_MACROS = YES #--------------------------------------------------------------------------- # Configuration::additions related to external references #--------------------------------------------------------------------------- # The TAGFILES option can be used to specify one or more tagfiles. # Optionally an initial location of the external documentation # can be added for each tagfile. The format of a tag file without # this location is as follows: # TAGFILES = file1 file2 ... # Adding location for the tag files is done as follows: # TAGFILES = file1=loc1 "file2 = loc2" ... # where "loc1" and "loc2" can be relative or absolute paths or # URLs. If a location is present for each tag, the installdox tool # does not have to be run to correct the links. # Note that each tag file must have a unique name # (where the name does NOT include the path) # If a tag file is not located in the directory in which doxygen # is run, you must also specify the path to the tagfile here. TAGFILES = # When a file name is specified after GENERATE_TAGFILE, doxygen will create # a tag file that is based on the input files it reads. GENERATE_TAGFILE = $(DOCDIR)/$(PROJECT).tag # If the ALLEXTERNALS tag is set to YES all external classes will be listed # in the class index. If set to NO only the inherited external classes # will be listed. ALLEXTERNALS = NO # If the EXTERNAL_GROUPS tag is set to YES all external groups will be listed # in the modules index. If set to NO, only the current project's groups will # be listed. EXTERNAL_GROUPS = YES # The PERL_PATH should be the absolute path and name of the perl script # interpreter (i.e. the result of `which perl'). 
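# --- Illustrative example (commented out) ----------------------------------
# Hypothetical cross-referencing against another project's documentation via
# the TAGFILES format documented above; both paths are placeholders only:
# TAGFILES = ../otherproj/otherproj.tag=../otherproj/html
#---------------------------------------------------------------------------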
PERL_PATH = /usr/bin/perl #--------------------------------------------------------------------------- # Configuration options related to the dot tool #--------------------------------------------------------------------------- # If the CLASS_DIAGRAMS tag is set to YES (the default) Doxygen will # generate an inheritance diagram (in HTML, RTF and LaTeX) for classes with base # or super classes. Setting the tag to NO turns the diagrams off. Note that # this option is superseded by the HAVE_DOT option below. This is only a # fallback. It is recommended to install and use dot, since it yields more # powerful graphs. CLASS_DIAGRAMS = YES # If set to YES, the inheritance and collaboration graphs will hide # inheritance and usage relations if the target is undocumented # or is not a class. HIDE_UNDOC_RELATIONS = YES # If you set the HAVE_DOT tag to YES then doxygen will assume the dot tool is # available from the path. This tool is part of Graphviz, a graph visualization # toolkit from AT&T and Lucent Bell Labs. The other options in this section # have no effect if this option is set to NO (the default). HAVE_DOT = $(HAVE_DOT) # If the CLASS_GRAPH and HAVE_DOT tags are set to YES then doxygen # will generate a graph for each documented class showing the direct and # indirect inheritance relations. Setting this tag to YES will force # the CLASS_DIAGRAMS tag to NO. CLASS_GRAPH = YES # If the COLLABORATION_GRAPH and HAVE_DOT tags are set to YES then doxygen # will generate a graph for each documented class showing the direct and # indirect implementation dependencies (inheritance, containment, and # class reference variables) of the class with other documented classes. COLLABORATION_GRAPH = YES # If the GROUP_GRAPHS and HAVE_DOT tags are set to YES then doxygen # will generate a graph for groups, showing the direct group dependencies. GROUP_GRAPHS = YES # If the UML_LOOK tag is set to YES doxygen will generate inheritance and # collaboration diagrams in a style similar to the OMG's Unified Modeling # Language. UML_LOOK = NO # If set to YES, the inheritance and collaboration graphs will show the # relations between templates and their instances. TEMPLATE_RELATIONS = NO # If the ENABLE_PREPROCESSING, SEARCH_INCLUDES, INCLUDE_GRAPH, and HAVE_DOT # tags are set to YES then doxygen will generate a graph for each documented # file showing the direct and indirect include dependencies of the file with # other documented files. INCLUDE_GRAPH = YES # If the ENABLE_PREPROCESSING, SEARCH_INCLUDES, INCLUDED_BY_GRAPH, and # HAVE_DOT tags are set to YES then doxygen will generate a graph for each # documented header file showing the documented files that directly or # indirectly include this file. INCLUDED_BY_GRAPH = YES # If the CALL_GRAPH and HAVE_DOT tags are set to YES then doxygen will # generate a call dependency graph for every global function or class method. # Note that enabling this option will significantly increase the time of a run. # So in most cases it will be better to enable call graphs for selected # functions only using the \callgraph command. CALL_GRAPH = NO # If the CALLER_GRAPH and HAVE_DOT tags are set to YES then doxygen will # generate a caller dependency graph for every global function or class method. # Note that enabling this option will significantly increase the time of a run. # So in most cases it will be better to enable caller graphs for selected # functions only using the \callergraph command. 
CALLER_GRAPH = NO # If the GRAPHICAL_HIERARCHY and HAVE_DOT tags are set to YES then doxygen # will show a graphical hierarchy of all classes instead of a textual one. GRAPHICAL_HIERARCHY = YES # If the DIRECTORY_GRAPH, SHOW_DIRECTORIES and HAVE_DOT tags are set to YES # then doxygen will show the dependencies a directory has on other directories # in a graphical way. The dependency relations are determined by the #include # relations between the files in the directories. DIRECTORY_GRAPH = YES # The DOT_IMAGE_FORMAT tag can be used to set the image format of the images # generated by dot. Possible values are png, jpg, or gif. # If left blank png will be used. DOT_IMAGE_FORMAT = png # The tag DOT_PATH can be used to specify the path where the dot tool can be # found. If left blank, it is assumed the dot tool can be found in the path. DOT_PATH = $(DOT_PATH) # The DOTFILE_DIRS tag can be used to specify one or more directories that # contain dot files that are included in the documentation (see the # \dotfile command). DOTFILE_DIRS = # The MAX_DOT_GRAPH_WIDTH tag can be used to set the maximum allowed width # (in pixels) of the graphs generated by dot. If a graph becomes larger than # this value, doxygen will try to truncate the graph, so that it fits within # the specified constraint. Beware that most browsers cannot cope with very # large images. MAX_DOT_GRAPH_WIDTH = 1024 # The MAX_DOT_GRAPH_HEIGHT tag can be used to set the maximum allowed height # (in pixels) of the graphs generated by dot. If a graph becomes larger than # this value, doxygen will try to truncate the graph, so that it fits within # the specified constraint. Beware that most browsers cannot cope with very # large images. MAX_DOT_GRAPH_HEIGHT = 1024 # The MAX_DOT_GRAPH_DEPTH tag can be used to set the maximum depth of the # graphs generated by dot. A depth value of 3 means that only nodes reachable # from the root by following a path via at most 3 edges will be shown. Nodes # that lie further from the root node will be omitted. Note that setting this # option to 1 or 2 may greatly reduce the computation time needed for large # code bases. Also note that a graph may be further truncated if the graph's # image dimensions are not sufficient to fit the graph (see MAX_DOT_GRAPH_WIDTH # and MAX_DOT_GRAPH_HEIGHT). If 0 is used for the depth value (the default), # the graph is not depth-constrained. MAX_DOT_GRAPH_DEPTH = 0 # Set the DOT_TRANSPARENT tag to YES to generate images with a transparent # background. This is disabled by default, which results in a white background. # Warning: Depending on the platform used, enabling this option may lead to # badly anti-aliased labels on the edges of a graph (i.e. they become hard to # read). DOT_TRANSPARENT = NO # Set the DOT_MULTI_TARGETS tag to YES to allow dot to generate multiple output # files in one run (i.e. multiple -o and -T options on the command line). This # makes dot run faster, but since only newer versions of dot (>1.8.10) # support this, this feature is disabled by default. DOT_MULTI_TARGETS = NO # If the GENERATE_LEGEND tag is set to YES (the default) Doxygen will # generate a legend page explaining the meaning of the various boxes and # arrows in the dot generated graphs. GENERATE_LEGEND = YES # If the DOT_CLEANUP tag is set to YES (the default) Doxygen will # remove the intermediate dot files that are used to generate # the various graphs. 
DOT_CLEANUP = YES #--------------------------------------------------------------------------- # Configuration::additions related to the search engine #--------------------------------------------------------------------------- # The SEARCHENGINE tag specifies whether or not a search engine should be # used. If set to NO the values of all tags below this one will be ignored. SEARCHENGINE = NO bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/configure.ac000066400000000000000000000034021244507361200245760ustar00rootroot00000000000000# # Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # AC_INIT([Hedwig C++ Client], [0.1], [bookkeeper-dev@zookeeper.apache.org], [hedwig-cpp], [http://zookeeper.apache.org/bookkeeper/]) AC_PREREQ([2.59]) AM_INIT_AUTOMAKE([1.9 no-define foreign]) AC_CONFIG_HEADERS([config.h]) AC_PROG_CXX AC_LANG([C++]) AC_CONFIG_FILES([Makefile lib/Makefile test/Makefile hedwig-0.1.pc]) AC_PROG_LIBTOOL AC_CONFIG_MACRO_DIR([m4]) PKG_CHECK_MODULES([DEPS], [liblog4cxx protobuf openssl]) GTEST_LIB_CHECK([1.5.0], [AC_MSG_RESULT([GoogleTest found, Tests Enabled])], [AC_MSG_WARN([GoogleTest not found, Tests disabled])]) AX_BOOST_BASE AX_BOOST_ASIO AX_BOOST_THREAD AC_CHECK_HEADER(tr1/memory, [AC_MSG_RESULT([Found builtin TR1 library])],[ AC_CHECK_HEADER(boost/tr1/memory.hpp, [AC_DEFINE(USE_BOOST_TR1, [], [Found Boost TR1 library])], [AC_MSG_ERROR([TR1 not found, builtin TR1 or boost TR1 is required])])]) DX_HTML_FEATURE(ON) DX_INIT_DOXYGEN(hedwig-c++, c-doc.Doxyfile, doc) CXXFLAGS="$CXXFLAGS -Wall" AC_OUTPUT bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/hedwig-0.1.pc.in000066400000000000000000000020311244507361200250010ustar00rootroot00000000000000# # Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # prefix=@prefix@ exec_prefix=@exec_prefix@ libdir=@libdir@ includedir=@includedir@ Name: hedwig-0.1 Description: Hedwig C++ client library. 
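# Illustrative usage of this pkg-config file, assuming the library has been
# installed and this file is visible on PKG_CONFIG_PATH (the command below is
# an example only, not part of the build):
#   g++ example.cpp $(pkg-config --cflags --libs hedwig-0.1)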
Requires: Version: @PACKAGE_VERSION@ Libs: -L${libdir} -lhedwig01 Cflags: -I${includedir}/hedwig-0.1 -I${libdir}/hedwig-0.1/include bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/inc/000077500000000000000000000000001244507361200230625ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/inc/hedwig/000077500000000000000000000000001244507361200243315ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/inc/hedwig/callback.h000066400000000000000000000054041244507361200262410ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #ifndef HEDWIG_CALLBACK_H #define HEDWIG_CALLBACK_H #include #include #include #ifdef USE_BOOST_TR1 #include #else #include #endif namespace Hedwig { // // A Listener registered for a Subscriber instance to emit events // for those disable resubscribe subscriptions. // class SubscriptionListener { public: virtual void processEvent(const std::string &topic, const std::string &subscriberId, const Hedwig::SubscriptionEvent event) = 0; virtual ~SubscriptionListener() {}; }; typedef std::tr1::shared_ptr SubscriptionListenerPtr; template class Callback { public: virtual void operationComplete(const R& result) = 0; virtual void operationFailed(const std::exception& exception) = 0; virtual ~Callback() {}; }; class OperationCallback { public: virtual void operationComplete() = 0; virtual void operationFailed(const std::exception& exception) = 0; virtual ~OperationCallback() {}; }; typedef std::tr1::shared_ptr OperationCallbackPtr; class MessageHandlerCallback { public: virtual void consume(const std::string& topic, const std::string& subscriberId, const Message& msg, OperationCallbackPtr& callback) = 0; virtual ~MessageHandlerCallback() {}; }; typedef std::tr1::shared_ptr MessageHandlerCallbackPtr; typedef std::tr1::shared_ptr SubscriptionPreferencesPtr; class ClientMessageFilter { public: virtual void setSubscriptionPreferences(const std::string& topic, const std::string& subscriberId, const SubscriptionPreferencesPtr& preferences) = 0; virtual bool testMessage(const Message& message) = 0; virtual ~ClientMessageFilter() {}; }; typedef std::tr1::shared_ptr ClientMessageFilterPtr; } #endif bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/inc/hedwig/client.h000066400000000000000000000057661244507361200257760ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. 
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #ifndef HEDWIG_CLIENT_H #define HEDWIG_CLIENT_H #include #ifdef USE_BOOST_TR1 #include #else #include #endif #include #include #include #include #include namespace Hedwig { class ClientImpl; typedef boost::shared_ptr ClientImplPtr; class Configuration { public: static const std::string DEFAULT_SERVER; static const std::string MESSAGE_CONSUME_RETRY_WAIT_TIME; static const std::string SUBSCRIBER_CONSUME_RETRY_WAIT_TIME; static const std::string MAX_MESSAGE_QUEUE_SIZE; static const std::string RECONNECT_SUBSCRIBE_RETRY_WAIT_TIME; static const std::string SYNC_REQUEST_TIMEOUT; static const std::string SUBSCRIBER_AUTOCONSUME; static const std::string NUM_DISPATCH_THREADS; static const std::string SSL_ENABLED; static const std::string SSL_PEM_FILE; static const std::string SUBSCRIPTION_CHANNEL_SHARING_ENABLED; /** * The maximum number of messages the hub will queue for subscriptions * created using this configuration. The hub will always queue the most * recent messages. If there are enough publishes to the topic to hit * the bound, then the oldest messages are dropped from the queue. * * A bound of 0 disables the bound completely. */ static const std::string SUBSCRIPTION_MESSAGE_BOUND; public: Configuration() {}; virtual int getInt(const std::string& key, int defaultVal) const = 0; virtual const std::string get(const std::string& key, const std::string& defaultVal) const = 0; virtual bool getBool(const std::string& key, bool defaultVal) const = 0; virtual ~Configuration() {} }; /** Main Hedwig client class. This class is used to acquire an instance of the Subscriber or Publisher. */ class Client : private boost::noncopyable { public: Client(const Configuration& conf); /** Retrieve the subscriber object */ Subscriber& getSubscriber(); /** Retrieve the publisher object */ Publisher& getPublisher(); ~Client(); private: ClientImplPtr clientimpl; }; }; #endif bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/inc/hedwig/exceptions.h000066400000000000000000000044611244507361200266700ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ #ifndef HEDWIG_EXCEPTION_H #define HEDWIG_EXCEPTION_H #include namespace Hedwig { class ClientException : public std::exception { }; class ClientTimeoutException : public ClientException {}; class ServiceDownException : public ClientException {}; class CannotConnectException : public ClientException {}; class UnexpectedResponseException : public ClientException {}; class OomException : public ClientException {}; class UnknownRequestException : public ClientException {}; class InvalidRedirectException : public ClientException {}; class NoChannelHandlerException : public ClientException {}; class PublisherException : public ClientException { }; class SubscriberException : public ClientException { }; class AlreadySubscribedException : public SubscriberException {}; class NotSubscribedException : public SubscriberException {}; class ResubscribeException : public SubscriberException {}; class NullMessageHandlerException : public SubscriberException {}; class NullMessageFilterException : public SubscriberException {}; class AlreadyStartDeliveryException : public SubscriberException {}; class StartingDeliveryException : public SubscriberException {}; class ConfigurationException : public ClientException { }; class InvalidPortException : public ConfigurationException {}; class HostResolutionException : public ClientException {}; class InvalidStateException : public ClientException {}; class ShuttingDownException : public InvalidStateException {}; }; #endif bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/inc/hedwig/publish.h000066400000000000000000000051371244507361200261560ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #ifndef HEDWIG_PUBLISH_H #define HEDWIG_PUBLISH_H #include #include #include #include #include namespace Hedwig { typedef std::tr1::shared_ptr PublishResponsePtr; typedef Callback PublishResponseCallback; typedef std::tr1::shared_ptr PublishResponseCallbackPtr; /** Interface for publishing to a hedwig instance. */ class Publisher : private boost::noncopyable { public: /** Publish message for topic, and block until we receive an ACK response from the hedwig server. @param topic Topic to publish to. @param message Data to publish for topic. */ virtual PublishResponsePtr publish(const std::string& topic, const std::string& message) = 0; virtual PublishResponsePtr publish(const std::string& topic, const Message& message) = 0; /** Asynchronously publish message for topic. @code OperationCallbackPtr callback(new MyCallback()); pub.asyncPublish(topic, message, callback); @endcode @param topic Topic to publish to. @param message Data to publish to topic. @param callback Callback which will be used to report success or failure. Success is only reported once the server replies with an ACK response to the publication. 
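      A slightly fuller sketch (illustrative only; MyCallback is a
      hypothetical user-supplied implementation of the OperationCallback
      interface declared in callback.h):
      @code
      class MyCallback : public OperationCallback {
      public:
        virtual void operationComplete() { }                         // server ACKed the publish
        virtual void operationFailed(const std::exception& e) { }    // report or retry here
      };
      OperationCallbackPtr callback(new MyCallback());
      pub.asyncPublish("my-topic", "my-message", callback);
      @endcode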
*/ virtual void asyncPublish(const std::string& topic, const std::string& message, const OperationCallbackPtr& callback) = 0; virtual void asyncPublish(const std::string& topic, const Message& message, const OperationCallbackPtr& callback) = 0; virtual void asyncPublishWithResponse(const std::string& topic, const Message& message, const PublishResponseCallbackPtr& callback) = 0; virtual ~Publisher() {} }; }; #endif bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/inc/hedwig/subscribe.h000066400000000000000000000066101244507361200264660ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #ifndef HEDWIG_SUBSCRIBE_H #define HEDWIG_SUBSCRIBE_H #include #include #include #include #include namespace Hedwig { /** Interface for subscribing to a hedwig instance. */ class Subscriber : private boost::noncopyable { public: virtual void subscribe(const std::string& topic, const std::string& subscriberId, const SubscribeRequest::CreateOrAttach mode) = 0; virtual void asyncSubscribe(const std::string& topic, const std::string& subscriberId, const SubscribeRequest::CreateOrAttach mode, const OperationCallbackPtr& callback) = 0; virtual void subscribe(const std::string& topic, const std::string& subscriberId, const SubscriptionOptions& options) = 0; virtual void asyncSubscribe(const std::string& topic, const std::string& subscriberId, const SubscriptionOptions& options, const OperationCallbackPtr& callback) = 0; virtual void unsubscribe(const std::string& topic, const std::string& subscriberId) = 0; virtual void asyncUnsubscribe(const std::string& topic, const std::string& subscriberId, const OperationCallbackPtr& callback) = 0; virtual void consume(const std::string& topic, const std::string& subscriberId, const MessageSeqId& messageSeqId) = 0; virtual void startDelivery(const std::string& topic, const std::string& subscriberId, const MessageHandlerCallbackPtr& callback) = 0; virtual void startDeliveryWithFilter(const std::string& topic, const std::string& subscriberId, const MessageHandlerCallbackPtr& callback, const ClientMessageFilterPtr& filter) = 0; virtual void stopDelivery(const std::string& topic, const std::string& subscriberId) = 0; virtual bool hasSubscription(const std::string& topic, const std::string& subscriberId) = 0; virtual void closeSubscription(const std::string& topic, const std::string& subscriberId) = 0; virtual void asyncCloseSubscription(const std::string& topic, const std::string& subscriberId, const OperationCallbackPtr& callback) = 0; // // API to register/unregister subscription listeners for receiving // events indicating subscription changes for subscriptions that have // resubscribe disabled // virtual void addSubscriptionListener(SubscriptionListenerPtr& listener) = 0; virtual void 
removeSubscriptionListener(SubscriptionListenerPtr& listener) = 0; virtual ~Subscriber() {} }; }; #endif bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/lib/000077500000000000000000000000001244507361200230575ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/lib/Makefile.am000066400000000000000000000031021244507361200251070ustar00rootroot00000000000000# # Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # PROTODEF = ../../../../../hedwig-protocol/src/main/protobuf/PubSubProtocol.proto lib_LTLIBRARIES = libhedwig01.la libhedwig01_la_SOURCES = protocol.cpp channel.cpp client.cpp util.cpp clientimpl.cpp publisherimpl.cpp subscriberimpl.cpp eventdispatcher.cpp data.cpp filterablemessagehandler.cpp simplesubscriberimpl.cpp multiplexsubscriberimpl.cpp libhedwig01_la_CPPFLAGS = -I$(top_srcdir)/inc $(DEPS_CFLAGS) libhedwig01_la_LIBADD = $(DEPS_LIBS) $(BOOST_CPPFLAGS) libhedwig01_la_LDFLAGS = -no-undefined $(BOOST_ASIO_LIB) $(BOOST_LDFLAGS) $(BOOST_THREAD_LIB) protocol.cpp: $(PROTODEF) protoc --cpp_out=. -I`dirname $(PROTODEF)` $(PROTODEF) sed "s/PubSubProtocol.pb.h/hedwig\/protocol.h/" PubSubProtocol.pb.cc > protocol.cpp rm PubSubProtocol.pb.cc mv PubSubProtocol.pb.h $(top_srcdir)/inc/hedwig/protocol.h bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/lib/channel.cpp000066400000000000000000000653071244507361200252060ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ #ifdef HAVE_CONFIG_H #include #endif #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include "channel.h" #include "util.h" #include "clientimpl.h" #include #include static log4cxx::LoggerPtr logger(log4cxx::Logger::getLogger("hedwig."__FILE__)); using namespace Hedwig; const std::string DEFAULT_SSL_PEM_FILE = ""; AbstractDuplexChannel::AbstractDuplexChannel(IOServicePtr& service, const HostAddress& addr, const ChannelHandlerPtr& handler) : address(addr), handler(handler), service(service->getService()), instream(&in_buf), copy_buf(NULL), copy_buf_length(0), state(UNINITIALISED), receiving(false), reading(false), sending(false), closed(false) {} AbstractDuplexChannel::~AbstractDuplexChannel() { free(copy_buf); copy_buf = NULL; copy_buf_length = 0; LOG4CXX_INFO(logger, "Destroying DuplexChannel(" << this << ")"); } ChannelHandlerPtr AbstractDuplexChannel::getChannelHandler() { return handler; } /*static*/ void AbstractDuplexChannel::connectCallbackHandler( AbstractDuplexChannelPtr channel, OperationCallbackPtr callback, const boost::system::error_code& error) { channel->doAfterConnect(callback, error); } void AbstractDuplexChannel::connect() { connect(OperationCallbackPtr()); } void AbstractDuplexChannel::connect(const OperationCallbackPtr& callback) { setState(CONNECTING); doConnect(callback); } void AbstractDuplexChannel::doAfterConnect(const OperationCallbackPtr& callback, const boost::system::error_code& error) { if (error) { LOG4CXX_ERROR(logger, "Channel " << this << " connect error : " << error.message().c_str()); channelConnectFailed(ChannelConnectException(), callback); return; } // set no delay option boost::system::error_code ec; setSocketOption(ec); if (ec) { LOG4CXX_WARN(logger, "Channel " << this << " set up socket error : " << ec.message().c_str()); channelConnectFailed(ChannelSetupException(), callback); return; } boost::asio::ip::tcp::endpoint localEp; boost::asio::ip::tcp::endpoint remoteEp; localEp = getLocalAddress(ec); remoteEp = getRemoteAddress(ec); if (!ec) { LOG4CXX_INFO(logger, "Channel " << this << " connected :" << localEp.address().to_string() << ":" << localEp.port() << "=>" << remoteEp.address().to_string() << ":" << remoteEp.port()); // update ip address since if might connect to VIP address.updateIP(remoteEp.address().to_v4().to_ulong()); } // the channel is connected channelConnected(callback); } void AbstractDuplexChannel::channelConnectFailed(const std::exception& e, const OperationCallbackPtr& callback) { channelDisconnected(e); setState(DEAD); if (callback.get()) { callback->operationFailed(e); } } void AbstractDuplexChannel::channelConnected(const OperationCallbackPtr& callback) { // for normal channel, we have done here setState(CONNECTED); if (callback.get()) { callback->operationComplete(); } // enable sending & receiving startSending(); startReceiving(); } /*static*/ void AbstractDuplexChannel::messageReadCallbackHandler( AbstractDuplexChannelPtr channel, std::size_t message_size, const boost::system::error_code& error, std::size_t bytes_transferred) { LOG4CXX_DEBUG(logger, "DuplexChannel::messageReadCallbackHandler " << error << ", " << bytes_transferred << " channel(" << channel.get() << ")"); if (error) { if (!channel->isClosed()) { LOG4CXX_INFO(logger, "Invalid read error (" << error << ") bytes_transferred (" << bytes_transferred << ") channel(" << channel.get() << ")"); } channel->channelDisconnected(ChannelReadException()); return; } if 
(channel->copy_buf_length < message_size) { channel->copy_buf_length = message_size; channel->copy_buf = (char*)realloc(channel->copy_buf, channel->copy_buf_length); if (channel->copy_buf == NULL) { LOG4CXX_ERROR(logger, "Error allocating buffer. channel(" << channel.get() << ")"); // if failed to realloc memory, we should disconnect the channel. // then it would enter disconnect logic, which would close channel and release // its resources includes the copy_buf memory. channel->channelDisconnected(ChannelOutOfMemoryException()); return; } } channel->instream.read(channel->copy_buf, message_size); PubSubResponsePtr response(new PubSubResponse()); bool err = response->ParseFromArray(channel->copy_buf, message_size); if (!err) { LOG4CXX_ERROR(logger, "Error parsing message. channel(" << channel.get() << ")"); channel->channelDisconnected(ChannelReadException()); return; } else { LOG4CXX_DEBUG(logger, "channel(" << channel.get() << ") : " << channel->in_buf.size() << " bytes left in buffer"); } ChannelHandlerPtr h; { boost::shared_lock lock(channel->destruction_lock); if (channel->handler.get()) { h = channel->handler; } } // channel did stopReceiving, we should not call #messageReceived // store this response in outstanding_response variable and did stop receiving // when we startReceiving again, we can process this last response. { boost::lock_guard lock(channel->receiving_lock); if (!channel->isReceiving()) { // queue the response channel->outstanding_response = response; channel->reading = false; return; } } // channel is still in receiving status if (h.get()) { h->messageReceived(channel, response); } AbstractDuplexChannel::readSize(channel); } /*static*/ void AbstractDuplexChannel::sizeReadCallbackHandler( AbstractDuplexChannelPtr channel, const boost::system::error_code& error, std::size_t bytes_transferred) { LOG4CXX_DEBUG(logger, "DuplexChannel::sizeReadCallbackHandler " << error << ", " << bytes_transferred << " channel(" << channel.get() << ")"); if (error) { if (!channel->isClosed()) { LOG4CXX_INFO(logger, "Invalid read error (" << error << ") bytes_transferred (" << bytes_transferred << ") channel(" << channel.get() << ")"); } channel->channelDisconnected(ChannelReadException()); return; } if (channel->in_buf.size() < sizeof(uint32_t)) { LOG4CXX_ERROR(logger, "Not enough data in stream. Must have been an error reading. 
" << " Closing channel(" << channel.get() << ")"); channel->channelDisconnected(ChannelReadException()); return; } uint32_t size; std::istream is(&channel->in_buf); is.read((char*)&size, sizeof(uint32_t)); size = ntohl(size); int toread = size - channel->in_buf.size(); LOG4CXX_DEBUG(logger, " size of incoming message " << size << ", currently in buffer " << channel->in_buf.size() << " channel(" << channel.get() << ")"); if (toread <= 0) { AbstractDuplexChannel::messageReadCallbackHandler(channel, size, error, 0); } else { channel->readMsgBody(channel->in_buf, toread, size); } } /*static*/ void AbstractDuplexChannel::readSize(AbstractDuplexChannelPtr channel) { int toread = sizeof(uint32_t) - channel->in_buf.size(); LOG4CXX_DEBUG(logger, " size of incoming message " << sizeof(uint32_t) << ", currently in buffer " << channel->in_buf.size() << " channel(" << channel.get() << ")"); if (toread < 0) { AbstractDuplexChannel::sizeReadCallbackHandler(channel, boost::system::error_code(), 0); } else { channel->readMsgSize(channel->in_buf); } } void AbstractDuplexChannel::startReceiving() { LOG4CXX_DEBUG(logger, "DuplexChannel::startReceiving channel(" << this << ") currently receiving = " << receiving); PubSubResponsePtr response; bool inReadingState; { boost::lock_guard lock(receiving_lock); // receiving before just return if (receiving) { return; } receiving = true; // if we have last response collected in previous startReceiving // we need to process it, but we should process it under receiving_lock // otherwise we enter dead lock // subscriber#startDelivery(subscriber#queue_lock) => // channel#startReceiving(channel#receiving_lock) => // sbuscriber#messageReceived(subscriber#queue_lock) if (outstanding_response.get()) { response = outstanding_response; outstanding_response = PubSubResponsePtr(); } // if channel is in reading status wait data from remote server // we don't need to insert another readSize op inReadingState = reading; if (!reading) { reading = true; } } // consume message buffered in receiving queue // there is at most one message buffered when we // stopReceiving between #readSize and #readMsgBody if (response.get()) { ChannelHandlerPtr h; { boost::shared_lock lock(this->destruction_lock); if (this->handler.get()) { h = this->handler; } } if (h.get()) { h->messageReceived(shared_from_this(), response); } } // if channel is not in reading state, #readSize if (!inReadingState) { AbstractDuplexChannel::readSize(shared_from_this()); } } bool AbstractDuplexChannel::isReceiving() { return receiving; } bool AbstractDuplexChannel::isClosed() { return closed; } void AbstractDuplexChannel::stopReceiving() { LOG4CXX_DEBUG(logger, "DuplexChannel::stopReceiving channel(" << this << ")"); boost::lock_guard lock(receiving_lock); receiving = false; } void AbstractDuplexChannel::startSending() { { boost::shared_lock lock(state_lock); if (state != CONNECTED) { return; } } boost::lock_guard lock(sending_lock); if (sending) { return; } LOG4CXX_DEBUG(logger, "AbstractDuplexChannel::startSending channel(" << this << ")"); WriteRequest w; { boost::lock_guard lock(write_lock); if (write_queue.empty()) { return; } w = write_queue.front(); write_queue.pop_front(); } sending = true; std::ostream os(&out_buf); uint32_t size = htonl(w.first->ByteSize()); os.write((char*)&size, sizeof(uint32_t)); bool err = w.first->SerializeToOstream(&os); if (!err) { w.second->operationFailed(ChannelWriteException()); channelDisconnected(ChannelWriteException()); return; } writeBuffer(out_buf, w.second); } const HostAddress& 
AbstractDuplexChannel::getHostAddress() const { return address; } void AbstractDuplexChannel::channelDisconnected(const std::exception& e) { setState(DEAD); { boost::lock_guard lock(write_lock); while (!write_queue.empty()) { WriteRequest w = write_queue.front(); write_queue.pop_front(); w.second->operationFailed(e); } } ChannelHandlerPtr h; { boost::shared_lock lock(destruction_lock); if (handler.get()) { h = handler; } } if (h.get()) { h->channelDisconnected(shared_from_this(), e); } } void AbstractDuplexChannel::close() { { boost::shared_lock statelock(state_lock); state = DEAD; } { boost::lock_guard lock(destruction_lock); if (closed) { // some one has closed the socket. return; } closed = true; handler = ChannelHandlerPtr(); // clear the handler in case it ever referenced the channel*/ } LOG4CXX_INFO(logger, "Killing duplex channel (" << this << ")"); // If we are going away, fail all transactions that haven't been completed failAllTransactions(); closeSocket(); } /*static*/ void AbstractDuplexChannel::writeCallbackHandler( AbstractDuplexChannelPtr channel, OperationCallbackPtr callback, const boost::system::error_code& error, std::size_t bytes_transferred) { if (error) { if (!channel->isClosed()) { LOG4CXX_DEBUG(logger, "AbstractDuplexChannel::writeCallbackHandler " << error << ", " << bytes_transferred << " channel(" << channel.get() << ")"); } callback->operationFailed(ChannelWriteException()); channel->channelDisconnected(ChannelWriteException()); return; } callback->operationComplete(); channel->out_buf.consume(bytes_transferred); { boost::lock_guard lock(channel->sending_lock); channel->sending = false; } channel->startSending(); } void AbstractDuplexChannel::writeRequest(const PubSubRequestPtr& m, const OperationCallbackPtr& callback) { { boost::shared_lock lock(state_lock); if (state != CONNECTED && state != CONNECTING) { LOG4CXX_ERROR(logger,"Tried to write transaction [" << m->txnid() << "] to a channel [" << this << "] which is " << (state == DEAD ? "DEAD" : "UNINITIALISED")); callback->operationFailed(UninitialisedChannelException()); return; } } { boost::lock_guard lock(write_lock); WriteRequest w(m, callback); write_queue.push_back(w); } startSending(); } // // Transaction operations // /** Store the transaction data for a request. */ void AbstractDuplexChannel::storeTransaction(const PubSubDataPtr& data) { LOG4CXX_DEBUG(logger, "Storing txnid(" << data->getTxnId() << ") for channel(" << this << ")"); boost::lock_guard lock(txnid2data_lock); txnid2data[data->getTxnId()] = data; } /** Give the transaction back to the caller. 
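    Note (behaviour per the implementation below): if no transaction is
    stored under the given txnid, an empty PubSubDataPtr is returned and an
    error is logged, so callers must check the result before using it.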
*/ PubSubDataPtr AbstractDuplexChannel::retrieveTransaction(long txnid) { boost::lock_guard lock(txnid2data_lock); PubSubDataPtr data = txnid2data[txnid]; txnid2data.erase(txnid); if (data == NULL) { LOG4CXX_ERROR(logger, "Transaction txnid(" << txnid << ") doesn't exist in channel (" << this << ")"); } return data; } void AbstractDuplexChannel::failAllTransactions() { boost::lock_guard lock(txnid2data_lock); for (TransactionMap::iterator iter = txnid2data.begin(); iter != txnid2data.end(); ++iter) { PubSubDataPtr& data = (*iter).second; data->getCallback()->operationFailed(ChannelDiedException()); } txnid2data.clear(); } // Set state for the channel void AbstractDuplexChannel::setState(State s) { boost::lock_guard lock(state_lock); state = s; } // // Basic Asio Channel Implementation // AsioDuplexChannel::AsioDuplexChannel(IOServicePtr& service, const HostAddress& addr, const ChannelHandlerPtr& handler) : AbstractDuplexChannel(service, addr, handler) { this->socket = boost_socket_ptr(new boost_socket(getService())); LOG4CXX_DEBUG(logger, "Creating DuplexChannel(" << this << ")"); } AsioDuplexChannel::~AsioDuplexChannel() { } void AsioDuplexChannel::doConnect(const OperationCallbackPtr& callback) { boost::system::error_code error = boost::asio::error::host_not_found; uint32_t ip2conn = address.ip(); uint16_t port2conn = address.port(); boost::asio::ip::tcp::endpoint endp(boost::asio::ip::address_v4(ip2conn), port2conn); socket->async_connect(endp, boost::bind(&AbstractDuplexChannel::connectCallbackHandler, shared_from_this(), callback, boost::asio::placeholders::error)); LOG4CXX_INFO(logger, "Channel (" << this << ") fire connect operation to ip (" << ip2conn << ") port (" << port2conn << ")"); } void AsioDuplexChannel::setSocketOption(boost::system::error_code& ec) { boost::asio::ip::tcp::no_delay option(true); socket->set_option(option, ec); } boost::asio::ip::tcp::endpoint AsioDuplexChannel::getLocalAddress( boost::system::error_code& ec) { return socket->local_endpoint(ec); } boost::asio::ip::tcp::endpoint AsioDuplexChannel::getRemoteAddress( boost::system::error_code& ec) { return socket->remote_endpoint(ec); } void AsioDuplexChannel::writeBuffer(boost::asio::streambuf& buffer, const OperationCallbackPtr& callback) { boost::asio::async_write(*socket, buffer, boost::bind(&AbstractDuplexChannel::writeCallbackHandler, shared_from_this(), callback, boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred) ); } void AsioDuplexChannel::readMsgSize(boost::asio::streambuf& buffer) { boost::asio::async_read(*socket, buffer, boost::asio::transfer_at_least(sizeof(uint32_t)), boost::bind(&AbstractDuplexChannel::sizeReadCallbackHandler, shared_from_this(), boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred)); } void AsioDuplexChannel::readMsgBody(boost::asio::streambuf& buffer, int toread, uint32_t msgSize) { boost::asio::async_read(*socket, buffer, boost::asio::transfer_at_least(toread), boost::bind(&AbstractDuplexChannel::messageReadCallbackHandler, shared_from_this(), msgSize, boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred)); } void AsioDuplexChannel::closeSocket() { boost::system::error_code ec; socket->cancel(ec); if (ec) { LOG4CXX_WARN(logger, "Channel " << this << " canceling io error : " << ec.message().c_str()); } socket->shutdown(boost::asio::ip::tcp::socket::shutdown_both, ec); if (ec) { LOG4CXX_WARN(logger, "Channel " << this << " shutdown error : " << ec.message().c_str()); } socket->close(ec); if (ec) { 
LOG4CXX_WARN(logger, "Channel " << this << " close error : " << ec.message().c_str()); } LOG4CXX_DEBUG(logger, "Closed socket for channel " << this << "."); } // SSL Context Factory SSLContextFactory::SSLContextFactory(const Configuration& conf) : conf(conf), sslPemFile(conf.get(Configuration::SSL_PEM_FILE, DEFAULT_SSL_PEM_FILE)) { } SSLContextFactory::~SSLContextFactory() {} boost_ssl_context_ptr SSLContextFactory::createSSLContext(boost::asio::io_service& service) { boost_ssl_context_ptr sslCtx(new boost_ssl_context(service, boost::asio::ssl::context::sslv23_client)); sslCtx->set_verify_mode(boost::asio::ssl::context::verify_none); if (!sslPemFile.empty()) { boost::system::error_code err; sslCtx->load_verify_file(sslPemFile, err); if (err) { LOG4CXX_ERROR(logger, "Failed to load verify ssl pem file : " << sslPemFile); throw InvalidSSLPermFileException(); } } return sslCtx; } // // SSL Channl Implementation // #ifndef __APPLE__ AsioSSLDuplexChannel::AsioSSLDuplexChannel(IOServicePtr& service, const boost_ssl_context_ptr& sslCtx, const HostAddress& addr, const ChannelHandlerPtr& handler) : AbstractDuplexChannel(service, addr, handler), ssl_ctx(sslCtx), sslclosed(false) { #else AsioSSLDuplexChannel::AsioSSLDuplexChannel(IOServicePtr& service, const boost_ssl_context_ptr& sslCtx, const HostAddress& addr, const ChannelHandlerPtr& handler) : AbstractDuplexChannel(service, addr, handler), ssl_ctx(sslCtx) { #endif ssl_socket = boost_ssl_socket_ptr(new boost_ssl_socket(getService(), *ssl_ctx)); LOG4CXX_DEBUG(logger, "Created SSL DuplexChannel(" << this << ")"); } AsioSSLDuplexChannel::~AsioSSLDuplexChannel() { } void AsioSSLDuplexChannel::doConnect(const OperationCallbackPtr& callback) { boost::system::error_code error = boost::asio::error::host_not_found; uint32_t ip2conn = address.ip(); uint16_t port2conn = address.sslPort(); boost::asio::ip::tcp::endpoint endp(boost::asio::ip::address_v4(ip2conn), port2conn); ssl_socket->lowest_layer().async_connect(endp, boost::bind(&AbstractDuplexChannel::connectCallbackHandler, shared_from_this(), callback, boost::asio::placeholders::error)); LOG4CXX_INFO(logger, "SSL Channel (" << this << ") fire connect operation to ip (" << ip2conn << ") port (" << port2conn << ")"); } void AsioSSLDuplexChannel::setSocketOption(boost::system::error_code& ec) { boost::asio::ip::tcp::no_delay option(true); ssl_socket->lowest_layer().set_option(option, ec); } boost::asio::ip::tcp::endpoint AsioSSLDuplexChannel::getLocalAddress( boost::system::error_code& ec) { return ssl_socket->lowest_layer().local_endpoint(ec); } boost::asio::ip::tcp::endpoint AsioSSLDuplexChannel::getRemoteAddress( boost::system::error_code& ec) { return ssl_socket->lowest_layer().remote_endpoint(ec); } void AsioSSLDuplexChannel::channelConnected(const OperationCallbackPtr& callback) { // for SSL channel, we had to do SSL hand shake startHandShake(callback); LOG4CXX_INFO(logger, "SSL Channel " << this << " fire hand shake operation"); } void AsioSSLDuplexChannel::sslChannelConnected(const OperationCallbackPtr& callback) { LOG4CXX_INFO(logger, "SSL Channel " << this << " hand shake finish!!"); AbstractDuplexChannel::channelConnected(callback); } void AsioSSLDuplexChannel::startHandShake(const OperationCallbackPtr& callback) { ssl_socket->async_handshake(boost::asio::ssl::stream_base::client, boost::bind(&AsioSSLDuplexChannel::handleHandshake, boost::dynamic_pointer_cast(shared_from_this()), callback, boost::asio::placeholders::error)); } void AsioSSLDuplexChannel::handleHandshake(AsioSSLDuplexChannelPtr 
channel, OperationCallbackPtr callback, const boost::system::error_code& error) { if (error) { LOG4CXX_ERROR(logger, "SSL Channel " << channel.get() << " hand shake error : " << error.message().c_str()); channel->channelConnectFailed(ChannelConnectException(), callback); return; } channel->sslChannelConnected(callback); } void AsioSSLDuplexChannel::writeBuffer(boost::asio::streambuf& buffer, const OperationCallbackPtr& callback) { boost::asio::async_write(*ssl_socket, buffer, boost::bind(&AbstractDuplexChannel::writeCallbackHandler, shared_from_this(), callback, boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred) ); } void AsioSSLDuplexChannel::readMsgSize(boost::asio::streambuf& buffer) { boost::asio::async_read(*ssl_socket, buffer, boost::asio::transfer_at_least(sizeof(uint32_t)), boost::bind(&AbstractDuplexChannel::sizeReadCallbackHandler, shared_from_this(), boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred)); } void AsioSSLDuplexChannel::readMsgBody(boost::asio::streambuf& buffer, int toread, uint32_t msgSize) { boost::asio::async_read(*ssl_socket, buffer, boost::asio::transfer_at_least(toread), boost::bind(&AbstractDuplexChannel::messageReadCallbackHandler, shared_from_this(), msgSize, boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred)); } #ifndef __APPLE__ // boost asio doesn't provide time out mechanism to shutdown ssl void AsioSSLDuplexChannel::sslShutdown() { ssl_socket->async_shutdown(boost::bind(&AsioSSLDuplexChannel::handleSSLShutdown, boost::shared_dynamic_cast(shared_from_this()), boost::asio::placeholders::error)); } void AsioSSLDuplexChannel::handleSSLShutdown(const boost::system::error_code& error) { if (error) { LOG4CXX_ERROR(logger, "SSL Channel " << this << " shutdown error : " << error.message().c_str()); } { boost::lock_guard lock(sslclosed_lock); sslclosed = true; } sslclosed_cond.notify_all(); } #endif void AsioSSLDuplexChannel::closeSocket() { #ifndef __APPLE__ // Shutdown ssl sslShutdown(); // time wait { boost::mutex::scoped_lock lock(sslclosed_lock); if (!sslclosed) { sslclosed_cond.timed_wait(lock, boost::posix_time::milliseconds(1000)); } } #endif closeLowestLayer(); } void AsioSSLDuplexChannel::closeLowestLayer() { boost::system::error_code ec; ssl_socket->lowest_layer().cancel(ec); if (ec) { LOG4CXX_WARN(logger, "Channel " << this << " canceling io error : " << ec.message().c_str()); } ssl_socket->lowest_layer().shutdown(boost::asio::ip::tcp::socket::shutdown_both, ec); if (ec) { LOG4CXX_WARN(logger, "Channel " << this << " shutdown error : " << ec.message().c_str()); } ssl_socket->lowest_layer().close(ec); if (ec) { LOG4CXX_WARN(logger, "Channel " << this << " close error : " << ec.message().c_str()); } } bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/lib/channel.h000066400000000000000000000364651244507361200246560ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. 
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #ifndef HEDWIG_CHANNEL_H #define HEDWIG_CHANNEL_H #include #include #include #include "util.h" #include "data.h" #include "eventdispatcher.h" #ifdef USE_BOOST_TR1 #include #include #else #include #include #endif #include #include #include #include #include #include #include #include #include namespace Hedwig { class ChannelException : public std::exception { }; class UninitialisedChannelException : public ChannelException {}; class ChannelConnectException : public ChannelException {}; class CannotCreateSocketException : public ChannelConnectException {}; class ChannelSetupException : public ChannelConnectException {}; class ChannelNotConnectedException : public ChannelConnectException {}; class ChannelDiedException : public ChannelException {}; class ChannelWriteException : public ChannelException {}; class ChannelReadException : public ChannelException {}; class ChannelThreadException : public ChannelException {}; class ChannelOutOfMemoryException : public ChannelException {}; class InvalidSSLPermFileException : public std::exception {}; class DuplexChannel; typedef boost::shared_ptr DuplexChannelPtr; typedef boost::asio::ip::tcp::socket boost_socket; typedef boost::shared_ptr boost_socket_ptr; typedef boost::asio::ssl::stream boost_ssl_socket; typedef boost::shared_ptr boost_ssl_socket_ptr; class ChannelHandler { public: virtual void messageReceived(const DuplexChannelPtr& channel, const PubSubResponsePtr& m) = 0; virtual void channelConnected(const DuplexChannelPtr& channel) = 0; virtual void channelDisconnected(const DuplexChannelPtr& channel, const std::exception& e) = 0; virtual void exceptionOccurred(const DuplexChannelPtr& channel, const std::exception& e) = 0; virtual ~ChannelHandler() {} }; typedef boost::shared_ptr ChannelHandlerPtr; // A channel interface to send requests class DuplexChannel { public: virtual ~DuplexChannel() {} // Return the channel handler bound with a channel virtual ChannelHandlerPtr getChannelHandler() = 0; // Issues a connect request to the target host // User could writeRequest after issued connect request, those requests should // be buffered and written until the channel is connected. virtual void connect() = 0; // Issues a connect request to the target host // User could writeRequest after issued connect request, those requests should // be buffered and written until the channel is connected. // The provided callback would be triggered after connected. virtual void connect(const OperationCallbackPtr& callback) = 0; // Write the request to underlying channel // If the channel is not established, all write requests would be buffered // until channel is connected. virtual void writeRequest(const PubSubRequestPtr& m, const OperationCallbackPtr& callback) = 0; // Returns the remote address where this channel is connected to. 
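/*
 * Usage sketch for the interface below (illustrative; `channel` stands for
 * any concrete DuplexChannel such as AsioDuplexChannel):
 *
 *   channel->connect(connectCb);          // fire the async connect
 *   channel->writeRequest(req, writeCb);  // legal before connect completes:
 *                                         // buffered, flushed once connected
 *   channel->startReceiving();            // begin reading responses
 *   channel->close();                     // idempotent; cannot be reopened
 */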
virtual const HostAddress& getHostAddress() const = 0; // Resumes the read operation of this channel asynchronously virtual void startReceiving() = 0; // Suspends the read operation of this channel asynchronously virtual void stopReceiving() = 0; // Returns true if and only if the channel will read a message virtual bool isReceiving() = 0; // // Transaction operations // // Store a pub/sub request virtual void storeTransaction(const PubSubDataPtr& data) = 0; // Remove a pub/sub request virtual PubSubDataPtr retrieveTransaction(long txnid) = 0; // Fail all transactions virtual void failAllTransactions() = 0; // Handle the case that the channel is disconnected due to issues found // when reading or writing virtual void channelDisconnected(const std::exception& e) = 0; // Close the channel to release the resources // Once a channel is closed, it cannot be opened again. Calling this // method on a closed channel has no effect. virtual void close() = 0; }; typedef boost::asio::ssl::context boost_ssl_context; typedef boost::shared_ptr<boost_ssl_context> boost_ssl_context_ptr; class SSLContextFactory { public: SSLContextFactory(const Configuration& conf); ~SSLContextFactory(); boost_ssl_context_ptr createSSLContext(boost::asio::io_service& service); private: const Configuration& conf; std::string sslPemFile; }; typedef boost::shared_ptr<SSLContextFactory> SSLContextFactoryPtr; class AbstractDuplexChannel; typedef boost::shared_ptr<AbstractDuplexChannel> AbstractDuplexChannelPtr; class AbstractDuplexChannel : public DuplexChannel, public boost::enable_shared_from_this<AbstractDuplexChannel> { public: AbstractDuplexChannel(IOServicePtr& service, const HostAddress& addr, const ChannelHandlerPtr& handler); virtual ~AbstractDuplexChannel(); virtual ChannelHandlerPtr getChannelHandler(); // // Connect Operation // // Asio Connect Callback Handler static void connectCallbackHandler(AbstractDuplexChannelPtr channel, OperationCallbackPtr callback, const boost::system::error_code& error); virtual void connect(); virtual void connect(const OperationCallbackPtr& callback); // // Write Operation // // Asio Write Callback Handler static void writeCallbackHandler(AbstractDuplexChannelPtr channel, OperationCallbackPtr callback, const boost::system::error_code& error, std::size_t bytes_transferred); // Write request virtual void writeRequest(const PubSubRequestPtr& m, const OperationCallbackPtr& callback); // get the target host virtual const HostAddress& getHostAddress() const; static void sizeReadCallbackHandler(AbstractDuplexChannelPtr channel, const boost::system::error_code& error, std::size_t bytes_transferred); static void messageReadCallbackHandler(AbstractDuplexChannelPtr channel, std::size_t messagesize, const boost::system::error_code& error, std::size_t bytes_transferred); static void readSize(AbstractDuplexChannelPtr channel); // start receiving responses from underlying channel virtual void startReceiving(); // is the underlying channel in receiving state virtual bool isReceiving(); // stop receiving responses from underlying channel virtual void stopReceiving(); // Store a pub/sub request virtual void storeTransaction(const PubSubDataPtr& data); // Remove a pub/sub request virtual PubSubDataPtr retrieveTransaction(long txnid); // Fail all transactions virtual void failAllTransactions(); // channel is disconnected due to a specified exception virtual void channelDisconnected(const std::exception& e); // close the channel virtual void close(); inline boost::asio::io_service & getService() const { return service; } protected: // execute the connect operation virtual void doConnect(const
OperationCallbackPtr& callback) = 0; virtual void doAfterConnect(const OperationCallbackPtr& callback, const boost::system::error_code& error); // Execute the action after channel connect // It would be executed in asio connect callback handler virtual void setSocketOption(boost::system::error_code& ec) = 0; virtual boost::asio::ip::tcp::endpoint getRemoteAddress(boost::system::error_code& ec) = 0; virtual boost::asio::ip::tcp::endpoint getLocalAddress(boost::system::error_code& ec) = 0; // Channel failed to connect virtual void channelConnectFailed(const std::exception& e, const OperationCallbackPtr& callback); // Channel connected virtual void channelConnected(const OperationCallbackPtr& callback); // Start sending buffered requests to target host void startSending(); // Write a buffer to underlying socket virtual void writeBuffer(boost::asio::streambuf& buffer, const OperationCallbackPtr& callback) = 0; // Read a message from underlying socket virtual void readMsgSize(boost::asio::streambuf& buffer) = 0; virtual void readMsgBody(boost::asio::streambuf& buffer, int toread, uint32_t msgSize) = 0; // is the channel under closing bool isClosed(); // close the underlying socket to release resource virtual void closeSocket() = 0; enum State { UNINITIALISED, CONNECTING, CONNECTED, DEAD }; void setState(State s); // Address HostAddress address; private: ChannelHandlerPtr handler; boost::asio::io_service &service; // buffers for input stream boost::asio::streambuf in_buf; std::istream instream; // only exists because protobufs can't play nice with streams // (if there's more than message len in it, it tries to read all) char* copy_buf; std::size_t copy_buf_length; // buffers for output stream boost::asio::streambuf out_buf; // write requests queue typedef std::pair WriteRequest; boost::mutex write_lock; std::deque write_queue; // channel state State state; boost::shared_mutex state_lock; // reading state bool receiving; bool reading; PubSubResponsePtr outstanding_response; boost::mutex receiving_lock; // sending state bool sending; boost::mutex sending_lock; // flag indicates the channel is closed // some callback might return when closing bool closed; // transactions typedef std::tr1::unordered_map TransactionMap; TransactionMap txnid2data; boost::mutex txnid2data_lock; boost::shared_mutex destruction_lock; }; class AsioDuplexChannel : public AbstractDuplexChannel { public: AsioDuplexChannel(IOServicePtr& service, const HostAddress& addr, const ChannelHandlerPtr& handler); virtual ~AsioDuplexChannel(); protected: // execute the connect operation virtual void doConnect(const OperationCallbackPtr& callback); // Execute the action after channel connect // It would be executed in asio connect callback handler virtual void setSocketOption(boost::system::error_code& ec); virtual boost::asio::ip::tcp::endpoint getRemoteAddress(boost::system::error_code& ec); virtual boost::asio::ip::tcp::endpoint getLocalAddress(boost::system::error_code& ec); // Write a buffer to underlying socket virtual void writeBuffer(boost::asio::streambuf& buffer, const OperationCallbackPtr& callback); // Read a message from underlying socket virtual void readMsgSize(boost::asio::streambuf& buffer); virtual void readMsgBody(boost::asio::streambuf& buffer, int toread, uint32_t msgSize); // close the underlying socket to release resource virtual void closeSocket(); private: // underlying socket boost_socket_ptr socket; }; typedef boost::shared_ptr AsioDuplexChannelPtr; class AsioSSLDuplexChannel; typedef boost::shared_ptr 
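/*
 * The read path declared above implements a length-prefixed wire format: a
 * 4-byte size is read first (readMsgSize), then at least that many bytes of
 * body (readMsgBody). A hedged synchronous sketch of the same framing,
 * assuming a connected tcp::socket `s`, a streambuf `buf`, and network byte
 * order for the prefix:
 *
 *   boost::asio::read(s, buf, boost::asio::transfer_at_least(sizeof(uint32_t)));
 *   uint32_t size = 0;
 *   std::istream(&buf).read(reinterpret_cast<char*>(&size), sizeof(size));
 *   size = ntohl(size);   // byte order assumed, not confirmed by this header
 *   if (buf.size() < size) {
 *     boost::asio::read(s, buf, boost::asio::transfer_at_least(size - buf.size()));
 *   }
 */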
AsioSSLDuplexChannelPtr; class AsioSSLDuplexChannel : public AbstractDuplexChannel { public: AsioSSLDuplexChannel(IOServicePtr& service, const boost_ssl_context_ptr& sslCtx, const HostAddress& addr, const ChannelHandlerPtr& handler); virtual ~AsioSSLDuplexChannel(); protected: // execute the connect operation virtual void doConnect(const OperationCallbackPtr& callback); // Execute the action after channel connect // It would be executed in asio connect callback handler virtual void setSocketOption(boost::system::error_code& ec); virtual boost::asio::ip::tcp::endpoint getRemoteAddress(boost::system::error_code& ec); virtual boost::asio::ip::tcp::endpoint getLocalAddress(boost::system::error_code& ec); virtual void channelConnected(const OperationCallbackPtr& callback); // Start SSL Hand Shake after the channel is connected void startHandShake(const OperationCallbackPtr& callback); // Asio Callback After Hand Shake static void handleHandshake(AsioSSLDuplexChannelPtr channel, OperationCallbackPtr callback, const boost::system::error_code& error); void sslChannelConnected(const OperationCallbackPtr& callback); // Write a buffer to underlying socket virtual void writeBuffer(boost::asio::streambuf& buffer, const OperationCallbackPtr& callback); // Read a message from underlying socket virtual void readMsgSize(boost::asio::streambuf& buffer); virtual void readMsgBody(boost::asio::streambuf& buffer, int toread, uint32_t msgSize); // close the underlying socket to release resource virtual void closeSocket(); private: #ifndef __APPLE__ // Shutdown ssl void sslShutdown(); // Handle ssl shutdown void handleSSLShutdown(const boost::system::error_code& error); #endif // Close lowest layer void closeLowestLayer(); // underlying ssl socket boost_ssl_socket_ptr ssl_socket; // ssl context boost_ssl_context_ptr ssl_ctx; #ifndef __APPLE__ // Flag indicated ssl is closed. bool sslclosed; boost::mutex sslclosed_lock; boost::condition_variable sslclosed_cond; #endif }; struct DuplexChannelPtrHash : public std::unary_function { size_t operator()(const Hedwig::DuplexChannelPtr& channel) const { return reinterpret_cast(channel.get()); } }; }; #endif bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/lib/client.cpp000066400000000000000000000051311244507361200250410ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ #ifdef HAVE_CONFIG_H #include #endif #include #include #include "clientimpl.h" #include static log4cxx::LoggerPtr logger(log4cxx::Logger::getLogger("hedwig."__FILE__)); using namespace Hedwig; const std::string Configuration::DEFAULT_SERVER = "hedwig.cpp.default_server"; const std::string Configuration::MESSAGE_CONSUME_RETRY_WAIT_TIME = "hedwig.cpp.message_consume_retry_wait_time"; const std::string Configuration::SUBSCRIBER_CONSUME_RETRY_WAIT_TIME = "hedwig.cpp.subscriber_consume_retry_wait_time"; const std::string Configuration::MAX_MESSAGE_QUEUE_SIZE = "hedwig.cpp.max_msgqueue_size"; const std::string Configuration::RECONNECT_SUBSCRIBE_RETRY_WAIT_TIME = "hedwig.cpp.reconnect_subscribe_retry_wait_time"; const std::string Configuration::SYNC_REQUEST_TIMEOUT = "hedwig.cpp.sync_request_timeout"; const std::string Configuration::SUBSCRIBER_AUTOCONSUME = "hedwig.cpp.subscriber_autoconsume"; const std::string Configuration::NUM_DISPATCH_THREADS = "hedwig.cpp.num_dispatch_threads"; const std::string Configuration::SUBSCRIPTION_MESSAGE_BOUND = "hedwig.cpp.subscription_message_bound"; const std::string Configuration::SSL_ENABLED = "hedwig.cpp.ssl_enabled"; const std::string Configuration::SSL_PEM_FILE = "hedwig.cpp.ssl_pem"; const std::string Configuration::SUBSCRIPTION_CHANNEL_SHARING_ENABLED = "hedwig.cpp.subscription_channel_sharing_enabled"; Client::Client(const Configuration& conf) { LOG4CXX_DEBUG(logger, "Client::Client (" << this << ")"); clientimpl = ClientImpl::Create( conf ); } Subscriber& Client::getSubscriber() { return clientimpl->getSubscriber(); } Publisher& Client::getPublisher() { return clientimpl->getPublisher(); } Client::~Client() { LOG4CXX_DEBUG(logger, "Client::~Client (" << this << ")"); clientimpl->Destroy(); } bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/lib/clientimpl.cpp000066400000000000000000000607141244507361200257330ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ #ifdef HAVE_CONFIG_H #include #endif #include "clientimpl.h" #include "channel.h" #include "publisherimpl.h" #include "subscriberimpl.h" #include "simplesubscriberimpl.h" #include "multiplexsubscriberimpl.h" #include static log4cxx::LoggerPtr logger(log4cxx::Logger::getLogger("hedwig."__FILE__)); using namespace Hedwig; const int DEFAULT_MESSAGE_FORCE_CONSUME_RETRY_WAIT_TIME = 5000; const std::string DEFAULT_SERVER_DEFAULT_VAL = ""; const bool DEFAULT_SSL_ENABLED = false; void SyncOperationCallback::wait() { boost::unique_lock lock(mut); while(response==PENDING) { if (cond.timed_wait(lock, boost::posix_time::milliseconds(timeout)) == false) { LOG4CXX_ERROR(logger, "Timeout waiting for operation to complete " << this); response = TIMEOUT; } } } void SyncOperationCallback::operationComplete() { if (response == TIMEOUT) { LOG4CXX_ERROR(logger, "operationCompleted successfully after timeout " << this); return; } { boost::lock_guard lock(mut); response = SUCCESS; } cond.notify_all(); } void SyncOperationCallback::operationFailed(const std::exception& exception) { if (response == TIMEOUT) { LOG4CXX_ERROR(logger, "operationCompleted unsuccessfully after timeout " << this); return; } { boost::lock_guard lock(mut); if (typeid(exception) == typeid(ChannelConnectException)) { response = NOCONNECT; } else if (typeid(exception) == typeid(ServiceDownException)) { response = SERVICEDOWN; } else if (typeid(exception) == typeid(AlreadySubscribedException)) { response = ALREADY_SUBSCRIBED; } else if (typeid(exception) == typeid(NotSubscribedException)) { response = NOT_SUBSCRIBED; } else { response = UNKNOWN; } } cond.notify_all(); } void SyncOperationCallback::throwExceptionIfNeeded() { switch (response) { case SUCCESS: break; case NOCONNECT: throw CannotConnectException(); break; case SERVICEDOWN: throw ServiceDownException(); break; case ALREADY_SUBSCRIBED: throw AlreadySubscribedException(); break; case NOT_SUBSCRIBED: throw NotSubscribedException(); break; case TIMEOUT: throw ClientTimeoutException(); break; default: throw ClientException(); break; } } ResponseHandler::ResponseHandler(const DuplexChannelManagerPtr& channelManager) : channelManager(channelManager) { } void ResponseHandler::redirectRequest(const PubSubResponsePtr& response, const PubSubDataPtr& data, const DuplexChannelPtr& channel) { HostAddress oldhost = channel->getHostAddress(); data->addTriedServer(oldhost); HostAddress h; bool redirectToDefaultHost = true; try { if (response->has_statusmsg()) { try { h = HostAddress::fromString(response->statusmsg()); redirectToDefaultHost = false; } catch (std::exception& e) { h = channelManager->getDefaultHost(); } } else { h = channelManager->getDefaultHost(); } } catch (std::exception& e) { LOG4CXX_ERROR(logger, "Failed to retrieve redirected host of request " << *data << " : " << e.what()); data->getCallback()->operationFailed(InvalidRedirectException()); return; } if (data->hasTriedServer(h)) { LOG4CXX_ERROR(logger, "We've been told to try request [" << data->getTxnId() << "] with [" << h.getAddressString()<< "] by " << oldhost.getAddressString() << " but we've already tried that. Failing operation"); data->getCallback()->operationFailed(InvalidRedirectException()); return; } LOG4CXX_INFO(logger, "We've been told [" << data->getTopic() << "] is on [" << h.getAddressString() << "] by [" << oldhost.getAddressString() << "]. 
Redirecting request " << data->getTxnId()); data->setShouldClaim(true); // submit the request again to the target host if (redirectToDefaultHost) { channelManager->submitOpToDefaultServer(data); } else { channelManager->redirectOpToHost(data, h); } } HedwigClientChannelHandler::HedwigClientChannelHandler(const DuplexChannelManagerPtr& channelManager, ResponseHandlerMap& handlers) : channelManager(channelManager), handlers(handlers), closed(false), disconnected(false) { } void HedwigClientChannelHandler::messageReceived(const DuplexChannelPtr& channel, const PubSubResponsePtr& m) { LOG4CXX_DEBUG(logger, "Message received txnid(" << m->txnid() << ") status(" << m->statuscode() << ")"); if (m->has_message()) { LOG4CXX_ERROR(logger, "Subscription response, ignore for now"); return; } PubSubDataPtr data = channel->retrieveTransaction(m->txnid()); /* you now have ownership of data, don't leave this funciton without deleting it or palming it off to someone else */ if (data.get() == 0) { LOG4CXX_ERROR(logger, "No pub/sub request for txnid(" << m->txnid() << ")."); return; } // Store the topic2Host mapping if this wasn't a server redirect. // TODO: add specific response for failure of getting topic ownership // to distinguish SERVICE_DOWN to failure of getting topic ownership if (m->statuscode() != NOT_RESPONSIBLE_FOR_TOPIC) { const HostAddress& host = channel->getHostAddress(); channelManager->setHostForTopic(data->getTopic(), host); } const ResponseHandlerPtr& respHandler = handlers[data->getType()]; if (respHandler.get()) { respHandler->handleResponse(m, data, channel); } else { LOG4CXX_ERROR(logger, "Unimplemented request type " << data->getType() << " : " << *data); data->getCallback()->operationFailed(UnknownRequestException()); } } void HedwigClientChannelHandler::channelConnected(const DuplexChannelPtr& channel) { // do nothing } void HedwigClientChannelHandler::channelDisconnected(const DuplexChannelPtr& channel, const std::exception& e) { if (channelManager->isClosed()) { return; } // If this channel was closed explicitly by the client code, // we do not need to do any of this logic. This could happen // for redundant Publish channels created or redirected subscribe // channels that are not used anymore or when we shutdown the // client and manually close all of the open channels. // Also don't do any of the disconnect logic if the client has stopped. 
{ boost::lock_guard lock(close_lock); if (closed) { return; } if (disconnected) { return; } disconnected = true; } LOG4CXX_INFO(logger, "Channel " << channel.get() << " was disconnected."); // execute logic after channel disconnected onChannelDisconnected(channel); } void HedwigClientChannelHandler::onChannelDisconnected(const DuplexChannelPtr& channel) { // Clean up the channel from channel manager channelManager->nonSubscriptionChannelDied(channel); } void HedwigClientChannelHandler::exceptionOccurred(const DuplexChannelPtr& channel, const std::exception& e) { LOG4CXX_ERROR(logger, "Exception occurred" << e.what()); } void HedwigClientChannelHandler::close() { { boost::lock_guard lock(close_lock); if (closed) { return; } closed = true; } // do close handle logic here doClose(); } void HedwigClientChannelHandler::doClose() { // do nothing for generic client channel handler } // // Pub/Sub Request Write Callback // PubSubWriteCallback::PubSubWriteCallback(const DuplexChannelPtr& channel, const PubSubDataPtr& data) : channel(channel), data(data) { } void PubSubWriteCallback::operationComplete() { LOG4CXX_INFO(logger, "Successfully wrote pubsub request : " << *data << " to channel " << channel.get()); } void PubSubWriteCallback::operationFailed(const std::exception& exception) { LOG4CXX_ERROR(logger, "Error writing pubsub request (" << *data << ") : " << exception.what()); // remove the transaction from channel if write failed channel->retrieveTransaction(data->getTxnId()); data->getCallback()->operationFailed(exception); } // // Default Server Connect Callback // DefaultServerConnectCallback::DefaultServerConnectCallback(const DuplexChannelManagerPtr& channelManager, const DuplexChannelPtr& channel, const PubSubDataPtr& data) : channelManager(channelManager), channel(channel), data(data) { } void DefaultServerConnectCallback::operationComplete() { LOG4CXX_DEBUG(logger, "Channel " << channel.get() << " is connected to host " << channel->getHostAddress() << "."); // After connected, we got the right ip for the target host // so we could submit the request right now channelManager->submitOpThruChannel(data, channel); } void DefaultServerConnectCallback::operationFailed(const std::exception& exception) { LOG4CXX_ERROR(logger, "Channel " << channel.get() << " failed to connect to host " << channel->getHostAddress() << " : " << exception.what()); data->getCallback()->operationFailed(exception); } // // Subscription Event Emitter // SubscriptionEventEmitter::SubscriptionEventEmitter() {} void SubscriptionEventEmitter::addSubscriptionListener( SubscriptionListenerPtr& listener) { boost::lock_guard lock(listeners_lock); listeners.insert(listener); } void SubscriptionEventEmitter::removeSubscriptionListener( SubscriptionListenerPtr& listener) { boost::lock_guard lock(listeners_lock); listeners.erase(listener); } void SubscriptionEventEmitter::emitSubscriptionEvent( const std::string& topic, const std::string& subscriberId, const SubscriptionEvent event) { boost::shared_lock lock(listeners_lock); if (0 == listeners.size()) { return; } for (SubscriptionListenerSet::iterator iter = listeners.begin(); iter != listeners.end(); ++iter) { (*iter)->processEvent(topic, subscriberId, event); } } // // Channel Manager Used to manage all established channels // DuplexChannelManagerPtr DuplexChannelManager::create(const Configuration& conf) { DuplexChannelManager * managerPtr; if (conf.getBool(Configuration::SUBSCRIPTION_CHANNEL_SHARING_ENABLED, false)) { managerPtr = new MultiplexDuplexChannelManager(conf); } 
else { managerPtr = new SimpleDuplexChannelManager(conf); } DuplexChannelManagerPtr manager(managerPtr); LOG4CXX_DEBUG(logger, "Created DuplexChannelManager " << manager.get()); return manager; } DuplexChannelManager::DuplexChannelManager(const Configuration& conf) : dispatcher(new EventDispatcher(conf)), conf(conf), closed(false), counterobj(), defaultHostAddress(conf.get(Configuration::DEFAULT_SERVER, DEFAULT_SERVER_DEFAULT_VAL)) { sslEnabled = conf.getBool(Configuration::SSL_ENABLED, DEFAULT_SSL_ENABLED); if (sslEnabled) { sslCtxFactory = SSLContextFactoryPtr(new SSLContextFactory(conf)); } LOG4CXX_DEBUG(logger, "Created DuplexChannelManager " << this << " with default server " << defaultHostAddress); } DuplexChannelManager::~DuplexChannelManager() { LOG4CXX_DEBUG(logger, "Destroyed DuplexChannelManager " << this); } void DuplexChannelManager::submitTo(const PubSubDataPtr& op, const DuplexChannelPtr& channel) { if (channel.get()) { channel->storeTransaction(op); OperationCallbackPtr writecb(new PubSubWriteCallback(channel, op)); LOG4CXX_DEBUG(logger, "Submit pub/sub request " << *op << " thru channel " << channel.get()); channel->writeRequest(op->getRequest(), writecb); } else { submitOpToDefaultServer(op); } } // Submit a pub/sub request void DuplexChannelManager::submitOp(const PubSubDataPtr& op) { DuplexChannelPtr channel; switch (op->getType()) { case PUBLISH: case UNSUBSCRIBE: try { channel = getNonSubscriptionChannel(op->getTopic()); } catch (std::exception& e) { LOG4CXX_ERROR(logger, "Failed to submit request " << *op << " : " << e.what()); op->getCallback()->operationFailed(e); return; } break; default: TopicSubscriber ts(op->getTopic(), op->getSubscriberId()); channel = getSubscriptionChannel(ts, op->isResubscribeRequest()); break; } // write the pub/sub request submitTo(op, channel); } // Submit a pub/sub request to target host void DuplexChannelManager::redirectOpToHost(const PubSubDataPtr& op, const HostAddress& addr) { DuplexChannelPtr channel; switch (op->getType()) { case PUBLISH: case UNSUBSCRIBE: // check whether there is a channel existed for non-subscription requests channel = getNonSubscriptionChannel(addr); if (!channel.get()) { channel = createNonSubscriptionChannel(addr); channel = storeNonSubscriptionChannel(channel, true); } break; default: channel = getSubscriptionChannel(addr); if (!channel.get()) { channel = createSubscriptionChannel(addr); channel = storeSubscriptionChannel(channel, true); } break; } // write the pub/sub request submitTo(op, channel); } // Submit a pub/sub request to established request void DuplexChannelManager::submitOpThruChannel(const PubSubDataPtr& op, const DuplexChannelPtr& ch) { DuplexChannelPtr channel; switch (op->getType()) { case PUBLISH: case UNSUBSCRIBE: channel = storeNonSubscriptionChannel(ch, false); break; default: channel = storeSubscriptionChannel(ch, false); break; } // write the pub/sub request submitTo(op, channel); } // Submit a pub/sub request to default server void DuplexChannelManager::submitOpToDefaultServer(const PubSubDataPtr& op) { DuplexChannelPtr channel; try { switch (op->getType()) { case PUBLISH: case UNSUBSCRIBE: channel = createNonSubscriptionChannel(getDefaultHost()); break; default: channel = createSubscriptionChannel(getDefaultHost()); break; } } catch (std::exception& e) { LOG4CXX_ERROR(logger, "Failed to create channel to default host " << defaultHostAddress << " for request " << op << " : " << e.what()); op->getCallback()->operationFailed(e); return; } OperationCallbackPtr connectCallback(new 
DefaultServerConnectCallback(shared_from_this(), channel, op)); // connect to default server. usually default server is a VIP, we only got the real // IP address after connected. so before connected, we don't know the real target host. // we only submit the request after channel is connected (ip address would be updated). channel->connect(connectCallback); } DuplexChannelPtr DuplexChannelManager::getNonSubscriptionChannel(const std::string& topic) { HostAddress addr; { boost::shared_lock lock(topic2host_lock); addr = topic2host[topic]; } if (addr.isNullHost()) { return DuplexChannelPtr(); } else { // we had known which hub server owned the topic DuplexChannelPtr ch = getNonSubscriptionChannel(addr); if (ch.get()) { return ch; } ch = createNonSubscriptionChannel(addr); return storeNonSubscriptionChannel(ch, true); } } DuplexChannelPtr DuplexChannelManager::getNonSubscriptionChannel(const HostAddress& addr) { boost::shared_lock lock(host2channel_lock); return host2channel[addr]; } DuplexChannelPtr DuplexChannelManager::createNonSubscriptionChannel(const HostAddress& addr) { // Create a non-subscription channel handler ChannelHandlerPtr handler(new HedwigClientChannelHandler(shared_from_this(), nonSubscriptionHandlers)); // Create a non subscription channel return createChannel(dispatcher->getService(), addr, handler); } DuplexChannelPtr DuplexChannelManager::storeNonSubscriptionChannel(const DuplexChannelPtr& ch, bool doConnect) { const HostAddress& host = ch->getHostAddress(); bool useOldCh; DuplexChannelPtr oldCh; { boost::lock_guard lock(host2channel_lock); oldCh = host2channel[host]; if (!oldCh.get()) { host2channel[host] = ch; useOldCh = false; } else { // If we've reached here, that means we already have a Channel // mapping for the given host. This should ideally not happen // and it means we are creating another Channel to a server host // to publish on when we could have used an existing one. This could // happen due to a race condition if initially multiple concurrent // threads are publishing on the same topic and no Channel exists // currently to the server. We are not synchronizing this initial // creation of Channels to a given host for performance. // Another possible way to have redundant Channels created is if // a new topic is being published to, we connect to the default // server host which should be a VIP that redirects to a "real" // server host. Since we don't know beforehand what is the full // set of server hosts, we could be redirected to a server that // we already have a channel connection to from a prior existing // topic. Close these redundant channels as they won't be used. 
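/*
 * Condensed form of the store-or-reuse idiom described above (sketch using
 * this class's own map and lock):
 *
 *   DuplexChannelPtr existing;
 *   {
 *     boost::lock_guard<boost::shared_mutex> lock(host2channel_lock);
 *     existing = host2channel[host];
 *     if (!existing.get()) host2channel[host] = ch;        // we won the race
 *   }
 *   if (existing.get()) { ch->close(); return existing; }  // lost: drop duplicate
 *   if (doConnect) { ch->connect(); }
 *   return ch;
 */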
useOldCh = true; } } if (useOldCh) { LOG4CXX_DEBUG(logger, "Channel " << oldCh.get() << " to host " << host << " already exists so close channel " << ch.get() << "."); ch->close(); return oldCh; } else { if (doConnect) { ch->connect(); } LOG4CXX_DEBUG(logger, "Storing channel " << ch.get() << " for host " << host << "."); return ch; } } DuplexChannelPtr DuplexChannelManager::createChannel(IOServicePtr& service, const HostAddress& addr, const ChannelHandlerPtr& handler) { DuplexChannelPtr channel; if (sslEnabled) { boost_ssl_context_ptr sslCtx = sslCtxFactory->createSSLContext(service->getService()); channel = DuplexChannelPtr(new AsioSSLDuplexChannel(service, sslCtx, addr, handler)); } else { channel = DuplexChannelPtr(new AsioDuplexChannel(service, addr, handler)); } boost::lock_guard lock(allchannels_lock); if (closed) { channel->close(); throw ShuttingDownException(); } allchannels.insert(channel); LOG4CXX_DEBUG(logger, "Created a channel to " << addr << ", all channels : " << allchannels.size()); return channel; } long DuplexChannelManager::nextTxnId() { return counterobj.next(); } void DuplexChannelManager::setHostForTopic(const std::string& topic, const HostAddress& host) { boost::lock_guard h2clock(host2topics_lock); boost::lock_guard t2hlock(topic2host_lock); topic2host[topic] = host; TopicSetPtr ts = host2topics[host]; if (!ts.get()) { ts = TopicSetPtr(new TopicSet()); host2topics[host] = ts; } ts->insert(topic); LOG4CXX_DEBUG(logger, "Set ownership of topic " << topic << " to " << host << "."); } void DuplexChannelManager::clearAllTopicsForHost(const HostAddress& addr) { // remove topic mapping boost::lock_guard h2tlock(host2topics_lock); boost::lock_guard t2hlock(topic2host_lock); Host2TopicsMap::iterator iter = host2topics.find(addr); if (iter != host2topics.end()) { for (TopicSet::iterator tsIter = iter->second->begin(); tsIter != iter->second->end(); ++tsIter) { topic2host.erase(*tsIter); } host2topics.erase(iter); } } void DuplexChannelManager::clearHostForTopic(const std::string& topic, const HostAddress& addr) { // remove topic mapping boost::lock_guard h2tlock(host2topics_lock); boost::lock_guard t2hlock(topic2host_lock); Host2TopicsMap::iterator iter = host2topics.find(addr); if (iter != host2topics.end()) { iter->second->erase(topic); } HostAddress existed = topic2host[topic]; if (existed == addr) { topic2host.erase(topic); } } const HostAddress& DuplexChannelManager::getHostForTopic(const std::string& topic) { boost::shared_lock t2hlock(topic2host_lock); return topic2host[topic]; } /** A channel has just died. Remove it so we never give it to any other publisher or subscriber. This does not delete the channel. Some publishers or subscribers will still hold it and will be errored when they try to do anything with it. 
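   In sketch form, the cleanup performed below is:

     clearAllTopicsForHost(addr);   // forget ownership learned through addr
     host2channel.erase(addr);      // done under host2channel_lock
     removeChannel(channel);        // drop from allchannels, then close()

   so the next request for an affected topic falls back to the default
   server and re-resolves ownership.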
*/ void DuplexChannelManager::nonSubscriptionChannelDied(const DuplexChannelPtr& channel) { // get host HostAddress addr = channel->getHostAddress(); // Clear the topic owner ship when a nonsubscription channel disconnected clearAllTopicsForHost(addr); // remove channel mapping { boost::lock_guard h2clock(host2channel_lock); host2channel.erase(addr); } removeChannel(channel); } void DuplexChannelManager::removeChannel(const DuplexChannelPtr& channel) { { boost::lock_guard aclock(allchannels_lock); allchannels.erase(channel); // channel should be deleted here } channel->close(); } void DuplexChannelManager::start() { // add non-subscription response handlers nonSubscriptionHandlers[PUBLISH] = ResponseHandlerPtr(new PublishResponseHandler(shared_from_this())); nonSubscriptionHandlers[UNSUBSCRIBE] = ResponseHandlerPtr(new UnsubscribeResponseHandler(shared_from_this())); // start the dispatcher dispatcher->start(); } bool DuplexChannelManager::isClosed() { boost::shared_lock lock(allchannels_lock); return closed; } void DuplexChannelManager::close() { // stop the dispatcher dispatcher->stop(); { boost::lock_guard lock(allchannels_lock); closed = true; for (ChannelMap::iterator iter = allchannels.begin(); iter != allchannels.end(); ++iter ) { (*iter)->close(); } allchannels.clear(); } // Unregistered response handlers nonSubscriptionHandlers.clear(); /* destruction of the maps will clean up any items they hold */ } ClientImplPtr ClientImpl::Create(const Configuration& conf) { ClientImplPtr impl(new ClientImpl(conf)); LOG4CXX_DEBUG(logger, "Creating Clientimpl " << impl); impl->channelManager->start(); return impl; } void ClientImpl::Destroy() { LOG4CXX_DEBUG(logger, "destroying Clientimpl " << this); // close the channel manager channelManager->close(); if (subscriber != NULL) { delete subscriber; subscriber = NULL; } if (publisher != NULL) { delete publisher; publisher = NULL; } } ClientImpl::ClientImpl(const Configuration& conf) : conf(conf), publisher(NULL), subscriber(NULL) { channelManager = DuplexChannelManager::create(conf); } Subscriber& ClientImpl::getSubscriber() { return getSubscriberImpl(); } Publisher& ClientImpl::getPublisher() { return getPublisherImpl(); } SubscriberImpl& ClientImpl::getSubscriberImpl() { if (subscriber == NULL) { boost::lock_guard lock(subscribercreate_lock); if (subscriber == NULL) { subscriber = new SubscriberImpl(channelManager); } } return *subscriber; } PublisherImpl& ClientImpl::getPublisherImpl() { if (publisher == NULL) { boost::lock_guard lock(publishercreate_lock); if (publisher == NULL) { publisher = new PublisherImpl(channelManager); } } return *publisher; } ClientImpl::~ClientImpl() { LOG4CXX_DEBUG(logger, "deleting Clientimpl " << this); } bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/lib/clientimpl.h000066400000000000000000000374011244507361200253750ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. 
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #ifndef HEDWIG_CLIENT_IMPL_H #define HEDWIG_CLIENT_IMPL_H #include #include #include #include #include #include #include #ifdef USE_BOOST_TR1 #include #else #include #endif #include #include "util.h" #include "channel.h" #include "data.h" #include "eventdispatcher.h" namespace Hedwig { const int DEFAULT_SYNC_REQUEST_TIMEOUT = 5000; template class SyncCallback : public Callback { public: SyncCallback(int timeout) : response(PENDING), timeout(timeout) {} virtual void operationComplete(const R& r) { if (response == TIMEOUT) { return; } { boost::lock_guard lock(mut); response = SUCCESS; result = r; } cond.notify_all(); } virtual void operationFailed(const std::exception& exception) { if (response == TIMEOUT) { return; } { boost::lock_guard lock(mut); if (typeid(exception) == typeid(ChannelConnectException)) { response = NOCONNECT; } else if (typeid(exception) == typeid(ServiceDownException)) { response = SERVICEDOWN; } else if (typeid(exception) == typeid(AlreadySubscribedException)) { response = ALREADY_SUBSCRIBED; } else if (typeid(exception) == typeid(NotSubscribedException)) { response = NOT_SUBSCRIBED; } else { response = UNKNOWN; } } cond.notify_all(); } void wait() { boost::unique_lock lock(mut); while(response==PENDING) { if (cond.timed_wait(lock, boost::posix_time::milliseconds(timeout)) == false) { response = TIMEOUT; } } } void throwExceptionIfNeeded() { switch (response) { case SUCCESS: break; case NOCONNECT: throw CannotConnectException(); break; case SERVICEDOWN: throw ServiceDownException(); break; case ALREADY_SUBSCRIBED: throw AlreadySubscribedException(); break; case NOT_SUBSCRIBED: throw NotSubscribedException(); break; case TIMEOUT: throw ClientTimeoutException(); break; default: throw ClientException(); break; } } R getResult() { return result; } private: enum { PENDING, SUCCESS, NOCONNECT, SERVICEDOWN, NOT_SUBSCRIBED, ALREADY_SUBSCRIBED, TIMEOUT, UNKNOWN } response; boost::condition_variable cond; boost::mutex mut; int timeout; R result; }; class SyncOperationCallback : public OperationCallback { public: SyncOperationCallback(int timeout) : response(PENDING), timeout(timeout) {} virtual void operationComplete(); virtual void operationFailed(const std::exception& exception); void wait(); void throwExceptionIfNeeded(); private: enum { PENDING, SUCCESS, NOCONNECT, SERVICEDOWN, NOT_SUBSCRIBED, ALREADY_SUBSCRIBED, TIMEOUT, UNKNOWN } response; boost::condition_variable cond; boost::mutex mut; int timeout; }; class DuplexChannelManager; typedef boost::shared_ptr DuplexChannelManagerPtr; // // Hedwig Response Handler // // Response Handler used to process response for different types of requests class ResponseHandler { public: ResponseHandler(const DuplexChannelManagerPtr& channelManager); virtual ~ResponseHandler() {}; virtual void handleResponse(const PubSubResponsePtr& m, const PubSubDataPtr& txn, const DuplexChannelPtr& channel) = 0; protected: // common method used to redirect request void redirectRequest(const PubSubResponsePtr& response, const PubSubDataPtr& data, const DuplexChannelPtr& channel); // channel manager to manage all established channels const 
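/*
 * Usage sketch for the SyncCallback template above (illustrative; the
 * asynchronous call that accepts the callback is elided):
 *
 *   SyncCallback<PubSubResponsePtr> cb(DEFAULT_SYNC_REQUEST_TIMEOUT);
 *   // ... hand `cb` to an asynchronous operation ...
 *   cb.wait();                    // block until completion or timeout
 *   cb.throwExceptionIfNeeded();  // map failure onto a typed exception
 *   PubSubResponsePtr result = cb.getResult();
 */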
DuplexChannelManagerPtr channelManager; }; typedef boost::shared_ptr ResponseHandlerPtr; typedef std::tr1::unordered_map ResponseHandlerMap; class PubSubWriteCallback : public OperationCallback { public: PubSubWriteCallback(const DuplexChannelPtr& channel, const PubSubDataPtr& data); virtual void operationComplete(); virtual void operationFailed(const std::exception& exception); private: DuplexChannelPtr channel; PubSubDataPtr data; }; class DefaultServerConnectCallback : public OperationCallback { public: DefaultServerConnectCallback(const DuplexChannelManagerPtr& channelManager, const DuplexChannelPtr& channel, const PubSubDataPtr& data); virtual void operationComplete(); virtual void operationFailed(const std::exception& exception); private: DuplexChannelManagerPtr channelManager; DuplexChannelPtr channel; PubSubDataPtr data; }; struct SubscriptionListenerPtrHash : public std::unary_function { size_t operator()(const Hedwig::SubscriptionListenerPtr& listener) const { return reinterpret_cast(listener.get()); } }; // Subscription Event Emitter class SubscriptionEventEmitter { public: SubscriptionEventEmitter(); void addSubscriptionListener(SubscriptionListenerPtr& listener); void removeSubscriptionListener(SubscriptionListenerPtr& listener); void emitSubscriptionEvent(const std::string& topic, const std::string& subscriberId, const SubscriptionEvent event); private: typedef std::tr1::unordered_set SubscriptionListenerSet; SubscriptionListenerSet listeners; boost::shared_mutex listeners_lock; }; class SubscriberClientChannelHandler; // // Duplex Channel Manager to manage all established channels // class DuplexChannelManager : public boost::enable_shared_from_this { public: static DuplexChannelManagerPtr create(const Configuration& conf); virtual ~DuplexChannelManager(); inline const Configuration& getConfiguration() const { return conf; } // Submit a pub/sub request void submitOp(const PubSubDataPtr& op); // Submit a pub/sub request to default host // It is called only when client doesn't have the knowledge of topic ownership void submitOpToDefaultServer(const PubSubDataPtr& op); // Redirect pub/sub request to a target hosts void redirectOpToHost(const PubSubDataPtr& op, const HostAddress& host); // Submit a pub/sub request thru established channel // It is called when connecting to default server to established a channel void submitOpThruChannel(const PubSubDataPtr& op, const DuplexChannelPtr& channel); // Generate next transaction id for pub/sub requests sending thru this manager long nextTxnId(); // return default host inline const HostAddress getDefaultHost() { return HostAddress::fromString(defaultHostAddress); } // set the owner host of a topic void setHostForTopic(const std::string& topic, const HostAddress& host); // clear all topics that hosted by a hub server void clearAllTopicsForHost(const HostAddress& host); // clear host for a given topic void clearHostForTopic(const std::string& topic, const HostAddress& host); // Called when a channel is disconnected void nonSubscriptionChannelDied(const DuplexChannelPtr& channel); // Remove a channel from all channel map void removeChannel(const DuplexChannelPtr& channel); // Get the subscription channel handler for a given subscription virtual boost::shared_ptr getSubscriptionChannelHandler(const TopicSubscriber& ts) = 0; // Close subscription for a given subscription virtual void asyncCloseSubscription(const TopicSubscriber& ts, const OperationCallbackPtr& callback) = 0; virtual void handoverDelivery(const TopicSubscriber& ts, const 
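/*
 * Sketch of the listener registration contract declared above (MyListener
 * is an assumed SubscriptionListener implementation):
 *
 *   SubscriptionListenerPtr listener(new MyListener());
 *   channelManager->getEventEmitter().addSubscriptionListener(listener);
 *   // every registered listener then receives
 *   //   processEvent(topic, subscriberId, event)
 *   // whenever emitSubscriptionEvent(...) fires.
 */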
MessageHandlerCallbackPtr& handler, const ClientMessageFilterPtr& filter) = 0; // start the channel manager virtual void start(); // close the channel manager virtual void close(); // whether the channel manager is closed bool isClosed(); // Return an available service inline boost::asio::io_service & getService() const { return dispatcher->getService()->getService(); } // Return the event emitter inline SubscriptionEventEmitter& getEventEmitter() { return eventEmitter; } protected: DuplexChannelManager(const Configuration& conf); // Get the ownership for a given topic. const HostAddress& getHostForTopic(const std::string& topic); // // Channel Management // // Non subscription channel management // Get a non subscription channel for a given topic // If the topic's owner is known, retrieve a subscription channel to // target host (if there is no channel existed, create one); // If the topic's owner is unknown, return null DuplexChannelPtr getNonSubscriptionChannel(const std::string& topic); // Get an existed non subscription channel to a given host DuplexChannelPtr getNonSubscriptionChannel(const HostAddress& addr); // Create a non subscription channel to a given host DuplexChannelPtr createNonSubscriptionChannel(const HostAddress& addr); // Store the established non subscription channel DuplexChannelPtr storeNonSubscriptionChannel(const DuplexChannelPtr& ch, bool doConnect); // // Subscription Channel Management // // Get a subscription channel for a given subscription. // If there is subscription channel established before, return it. // Otherwise, check whether the topic's owner is known. If the topic owner // is known, retrieve a subscription channel to target host (if there is no // channel exsited, create one); If unknown, return null virtual DuplexChannelPtr getSubscriptionChannel(const TopicSubscriber& ts, const bool isResubscribeRequest) = 0; // Get an existed subscription channel to a given host virtual DuplexChannelPtr getSubscriptionChannel(const HostAddress& addr) = 0; // Create a subscription channel to a given host // If store is true, store the channel for future usage. // If store is false, return a newly created channel. 
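/*
 * The get-or-create flow the comments above describe, in sketch form:
 *
 *   DuplexChannelPtr ch = getSubscriptionChannel(addr);   // existing one?
 *   if (!ch.get()) {
 *     ch = createSubscriptionChannel(addr);               // build a new one
 *     ch = storeSubscriptionChannel(ch, true);            // store + connect;
 *   }                                                     // may hand back the
 *   return ch;                                            // winner of a race
 */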
virtual DuplexChannelPtr createSubscriptionChannel(const HostAddress& addr) = 0; // Store the established subscription channel virtual DuplexChannelPtr storeSubscriptionChannel(const DuplexChannelPtr& ch, bool doConnect) = 0; // // Raw Channel Management // // Create a raw channel DuplexChannelPtr createChannel(IOServicePtr& service, const HostAddress& addr, const ChannelHandlerPtr& handler); // event dispatcher running io threads typedef boost::shared_ptr EventDispatcherPtr; EventDispatcherPtr dispatcher; // topic2host mapping for topic ownership std::tr1::unordered_map topic2host; boost::shared_mutex topic2host_lock; typedef std::tr1::unordered_set TopicSet; typedef boost::shared_ptr TopicSetPtr; typedef std::tr1::unordered_map Host2TopicsMap; Host2TopicsMap host2topics; boost::shared_mutex host2topics_lock; private: // write the request to target channel void submitTo(const PubSubDataPtr& op, const DuplexChannelPtr& channel); const Configuration& conf; bool sslEnabled; SSLContextFactoryPtr sslCtxFactory; // whether the channel manager is shutting down bool closed; // counter used for generating transaction ids ClientTxnCounter counterobj; std::string defaultHostAddress; // non-subscription channels std::tr1::unordered_map host2channel; boost::shared_mutex host2channel_lock; // maintain all established channels typedef std::tr1::unordered_set ChannelMap; ChannelMap allchannels; boost::shared_mutex allchannels_lock; // Response Handlers for non-subscription requests ResponseHandlerMap nonSubscriptionHandlers; // Subscription Event Emitter SubscriptionEventEmitter eventEmitter; }; // // Hedwig Client Channel Handler to handle responses received from the channel // class HedwigClientChannelHandler : public ChannelHandler { public: HedwigClientChannelHandler(const DuplexChannelManagerPtr& channelManager, ResponseHandlerMap& handlers); virtual ~HedwigClientChannelHandler() {} virtual void messageReceived(const DuplexChannelPtr& channel, const PubSubResponsePtr& m); virtual void channelConnected(const DuplexChannelPtr& channel); virtual void channelDisconnected(const DuplexChannelPtr& channel, const std::exception& e); virtual void exceptionOccurred(const DuplexChannelPtr& channel, const std::exception& e); void close(); protected: // real channel disconnected logic virtual void onChannelDisconnected(const DuplexChannelPtr& channel); // real close logic virtual void doClose(); // channel manager to manage all established channels const DuplexChannelManagerPtr channelManager; ResponseHandlerMap& handlers; boost::shared_mutex close_lock; // Boolean indicating if we closed the handler explicitly or not. // If so, we do not need to do the channel disconnected logic here. bool closed; // whether channel is disconnected. bool disconnected; }; class PublisherImpl; class SubscriberImpl; /** Implementation of the hedwig client. This class takes care of globals such as the topic->host map and the transaction id counter. 
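   Typical entry point through the public wrapper in client.cpp (sketch;
   MyConfiguration stands for any application-supplied Configuration
   subclass):

     MyConfiguration conf;
     Hedwig::Client client(conf);
     Hedwig::Publisher& pub = client.getPublisher();
     Hedwig::Subscriber& sub = client.getSubscriber();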
*/ class ClientImpl : public boost::enable_shared_from_this { public: static ClientImplPtr Create(const Configuration& conf); void Destroy(); Subscriber& getSubscriber(); Publisher& getPublisher(); SubscriberImpl& getSubscriberImpl(); PublisherImpl& getPublisherImpl(); ~ClientImpl(); private: ClientImpl(const Configuration& conf); const Configuration& conf; boost::mutex publishercreate_lock; PublisherImpl* publisher; boost::mutex subscribercreate_lock; SubscriberImpl* subscriber; // channel manager manage all channels for the client DuplexChannelManagerPtr channelManager; }; }; #endif bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/lib/data.cpp000066400000000000000000000177711244507361200245110ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #ifdef HAVE_CONFIG_H #include #endif #include #include "data.h" #include #include #include #define stringify( name ) #name static log4cxx::LoggerPtr logger(log4cxx::Logger::getLogger("hedwig."__FILE__)); using namespace Hedwig; const char* OPERATION_TYPE_NAMES[] = { stringify( PUBLISH ), stringify( SUBSCRIBE ), stringify( CONSUME ), stringify( UNSUBSCRIBE ), stringify( START_DELIVERY ), stringify( STOP_DELIVERY ), stringify( CLOSESUBSCRIPTION ) }; PubSubDataPtr PubSubData::forPublishRequest(long txnid, const std::string& topic, const Message& body, const ResponseCallbackPtr& callback) { PubSubDataPtr ptr(new PubSubData()); ptr->type = PUBLISH; ptr->txnid = txnid; ptr->topic = topic; ptr->body.CopyFrom(body); ptr->callback = callback; return ptr; } PubSubDataPtr PubSubData::forSubscribeRequest(long txnid, const std::string& subscriberid, const std::string& topic, const ResponseCallbackPtr& callback, const SubscriptionOptions& options) { PubSubDataPtr ptr(new PubSubData()); ptr->type = SUBSCRIBE; ptr->txnid = txnid; ptr->subscriberid = subscriberid; ptr->topic = topic; ptr->callback = callback; ptr->options = options; return ptr; } PubSubDataPtr PubSubData::forUnsubscribeRequest(long txnid, const std::string& subscriberid, const std::string& topic, const ResponseCallbackPtr& callback) { PubSubDataPtr ptr(new PubSubData()); ptr->type = UNSUBSCRIBE; ptr->txnid = txnid; ptr->subscriberid = subscriberid; ptr->topic = topic; ptr->callback = callback; return ptr; } PubSubDataPtr PubSubData::forCloseSubscriptionRequest( long txnid, const std::string& subscriberid, const std::string& topic, const ResponseCallbackPtr& callback) { PubSubDataPtr ptr(new PubSubData()); ptr->type = CLOSESUBSCRIPTION; ptr->txnid = txnid; ptr->subscriberid = subscriberid; ptr->topic = topic; ptr->callback = callback; return ptr; } PubSubDataPtr PubSubData::forConsumeRequest(long txnid, const std::string& subscriberid, const std::string& topic, const MessageSeqId msgid) { PubSubDataPtr ptr(new PubSubData()); ptr->type = CONSUME; 
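/*
 * All of the static factories in this file follow the pattern visible here:
 * allocate a private PubSubData and fill in only the fields the request type
 * needs. Sketch of a typical caller (txn ids come from the channel manager):
 *
 *   long txnid = channelManager->nextTxnId();
 *   PubSubDataPtr op = PubSubData::forPublishRequest(txnid, topic, msg, cb);
 *   channelManager->submitOp(op);
 */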
ptr->txnid = txnid; ptr->subscriberid = subscriberid; ptr->topic = topic; ptr->msgid = msgid; return ptr; } PubSubData::PubSubData() : shouldClaim(false), messageBound(0) { } PubSubData::~PubSubData() { } OperationType PubSubData::getType() const { return type; } long PubSubData::getTxnId() const { return txnid; } const std::string& PubSubData::getTopic() const { return topic; } const Message& PubSubData::getBody() const { return body; } const MessageSeqId PubSubData::getMessageSeqId() const { return msgid; } void PubSubData::setPreferencesForSubRequest(SubscribeRequest * subreq, const SubscriptionOptions &options) { Hedwig::SubscriptionPreferences* preferences = subreq->mutable_preferences(); if (options.messagebound() > 0) { preferences->set_messagebound(options.messagebound()); } if (options.has_messagefilter()) { preferences->set_messagefilter(options.messagefilter()); } if (options.has_options()) { preferences->mutable_options()->CopyFrom(options.options()); } if (options.has_messagewindowsize()) { preferences->set_messagewindowsize(options.messagewindowsize()); } } const PubSubRequestPtr PubSubData::getRequest() { PubSubRequestPtr request(new Hedwig::PubSubRequest()); request->set_protocolversion(Hedwig::VERSION_ONE); request->set_type(type); request->set_txnid(txnid); if (shouldClaim) { request->set_shouldclaim(shouldClaim); } request->set_topic(topic); if (type == PUBLISH) { LOG4CXX_DEBUG(logger, "Creating publish request"); Hedwig::PublishRequest* pubreq = request->mutable_publishrequest(); Hedwig::Message* msg = pubreq->mutable_msg(); msg->CopyFrom(body); } else if (type == SUBSCRIBE) { LOG4CXX_DEBUG(logger, "Creating subscribe request"); Hedwig::SubscribeRequest* subreq = request->mutable_subscriberequest(); subreq->set_subscriberid(subscriberid); subreq->set_createorattach(options.createorattach()); subreq->set_forceattach(options.forceattach()); setPreferencesForSubRequest(subreq, options); } else if (type == CONSUME) { LOG4CXX_DEBUG(logger, "Creating consume request"); Hedwig::ConsumeRequest* conreq = request->mutable_consumerequest(); conreq->set_subscriberid(subscriberid); conreq->mutable_msgid()->CopyFrom(msgid); } else if (type == UNSUBSCRIBE) { LOG4CXX_DEBUG(logger, "Creating unsubscribe request"); Hedwig::UnsubscribeRequest* unsubreq = request->mutable_unsubscriberequest(); unsubreq->set_subscriberid(subscriberid); } else if (type == CLOSESUBSCRIPTION) { LOG4CXX_DEBUG(logger, "Creating closeSubscription request"); Hedwig::CloseSubscriptionRequest* closesubreq = request->mutable_closesubscriptionrequest(); closesubreq->set_subscriberid(subscriberid); } else { LOG4CXX_ERROR(logger, "Tried to create a request message for the wrong type [" << type << "]"); throw UnknownRequestException(); } return request; } void PubSubData::setShouldClaim(bool shouldClaim) { this->shouldClaim = shouldClaim; } void PubSubData::addTriedServer(HostAddress& h) { triedservers.insert(h); } bool PubSubData::hasTriedServer(HostAddress& h) { return triedservers.count(h) > 0; } void PubSubData::clearTriedServers() { triedservers.clear(); } ResponseCallbackPtr& PubSubData::getCallback() { return callback; } void PubSubData::setCallback(const ResponseCallbackPtr& callback) { this->callback = callback; } const std::string& PubSubData::getSubscriberId() const { return subscriberid; } const SubscriptionOptions& PubSubData::getSubscriptionOptions() const { return options; } void PubSubData::setOrigChannelForResubscribe( boost::shared_ptr& channel) { this->origChannel = channel; } boost::shared_ptr& 
PubSubData::getOrigChannelForResubscribe() {
  return this->origChannel;
}

bool PubSubData::isResubscribeRequest() {
  return 0 != this->origChannel.get();
}

ClientTxnCounter::ClientTxnCounter() : counter(0) {
}

ClientTxnCounter::~ClientTxnCounter() {
}

/**
Increment the transaction counter and return the new value.

@returns the next transaction id
*/
long ClientTxnCounter::next() {
  // would be nice to remove lock from here, look more into it
  boost::lock_guard<boost::mutex> lock(mutex);
  long next = ++counter;
  return next;
}

std::ostream& Hedwig::operator<<(std::ostream& os, const PubSubData& data) {
  OperationType type = data.getType();
  os << "[" << OPERATION_TYPE_NAMES[type] << " request (txn:" << data.getTxnId()
     << ") for (topic:" << data.getTopic();
  switch (type) {
  case SUBSCRIBE:
  case UNSUBSCRIBE:
  case CLOSESUBSCRIPTION:
    os << ", subscriber:" << data.getSubscriberId() << ")";
    break;
  case CONSUME:
    os << ", subscriber:" << data.getSubscriberId()
       << ", seq:" << data.getMessageSeqId().localcomponent() << ")";
    break;
  case PUBLISH:
  default:
    os << ")";
    break;
  }
  return os;
}
bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/lib/data.h000066400000000000000000000106641244507361200241500ustar00rootroot00000000000000/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
#ifndef DATA_H
#define DATA_H

#include <hedwig/protocol.h>
#include <hedwig/callback.h>
#include <boost/shared_ptr.hpp>
#include <boost/thread/mutex.hpp>

#ifdef USE_BOOST_TR1
#include <boost/tr1/unordered_set.hpp>
#else
#include <tr1/unordered_set>
#endif

#include "util.h"
#include
#include

namespace Hedwig {

  /**
     Simple counter for transaction ids from the client
  */
  class ClientTxnCounter {
  public:
    ClientTxnCounter();
    ~ClientTxnCounter();
    long next();
  private:
    long counter;
    boost::mutex mutex;
  };

  typedef Callback<ResponseBody> ResponseCallback;
  typedef std::tr1::shared_ptr<ResponseCallback> ResponseCallbackPtr;

  class PubSubData;
  typedef boost::shared_ptr<PubSubData> PubSubDataPtr;
  typedef boost::shared_ptr<PubSubRequest> PubSubRequestPtr;
  typedef boost::shared_ptr<PubSubResponse> PubSubResponsePtr;

  class DuplexChannel;

  /**
     Data structure to hold information about requests and build request
     messages. Used to store requests which may need to be resent to
     another server.
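
     A minimal usage sketch (illustrative only, not part of the original
     source; myCallback stands for a caller-supplied ResponseCallbackPtr):

       ClientTxnCounter counter;
       Message msg;
       msg.set_body("hello");
       PubSubDataPtr op = PubSubData::forPublishRequest(counter.next(),
                                                        "topic", msg,
                                                        myCallback);
       PubSubRequestPtr req = op->getRequest(); // builds the wire protobuf;
                                                // can be rebuilt on resend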
*/ class PubSubData { public: // to be used for publish static PubSubDataPtr forPublishRequest(long txnid, const std::string& topic, const Message& body, const ResponseCallbackPtr& callback); static PubSubDataPtr forSubscribeRequest(long txnid, const std::string& subscriberid, const std::string& topic, const ResponseCallbackPtr& callback, const SubscriptionOptions& options); static PubSubDataPtr forUnsubscribeRequest(long txnid, const std::string& subscriberid, const std::string& topic, const ResponseCallbackPtr& callback); static PubSubDataPtr forConsumeRequest(long txnid, const std::string& subscriberid, const std::string& topic, const MessageSeqId msgid); static PubSubDataPtr forCloseSubscriptionRequest(long txnid, const std::string& subscriberid, const std::string& topic, const ResponseCallbackPtr& callback); ~PubSubData(); OperationType getType() const; long getTxnId() const; const std::string& getSubscriberId() const; const std::string& getTopic() const; const Message& getBody() const; const MessageSeqId getMessageSeqId() const; void setShouldClaim(bool shouldClaim); void setMessageBound(int messageBound); const PubSubRequestPtr getRequest(); void setCallback(const ResponseCallbackPtr& callback); ResponseCallbackPtr& getCallback(); const SubscriptionOptions& getSubscriptionOptions() const; void addTriedServer(HostAddress& h); bool hasTriedServer(HostAddress& h); void clearTriedServers(); void setOrigChannelForResubscribe(boost::shared_ptr& channel); bool isResubscribeRequest(); boost::shared_ptr& getOrigChannelForResubscribe(); friend std::ostream& operator<<(std::ostream& os, const PubSubData& data); private: PubSubData(); void setPreferencesForSubRequest(SubscribeRequest * subreq, const SubscriptionOptions &options); OperationType type; long txnid; std::string subscriberid; std::string topic; Message body; bool shouldClaim; int messageBound; ResponseCallbackPtr callback; SubscriptionOptions options; MessageSeqId msgid; std::tr1::unordered_set triedservers; // record the origChannel for a resubscribe request boost::shared_ptr origChannel; }; }; #endif bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/lib/eventdispatcher.cpp000066400000000000000000000063621244507361200267620ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
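 *
 * [Editorial sketch, not part of the original file.] IOService below wraps
 * a boost::asio::io_service; a plain io_service::run() returns as soon as
 * no handlers are pending, so start() installs an io_service::work object
 * to keep run() alive until stop() resets it -- the standard asio idiom:
 *
 *   void runIt(boost::asio::io_service* svc) { svc->run(); }
 *
 *   boost::asio::io_service svc;
 *   boost::shared_ptr<boost::asio::io_service::work>
 *       keepAlive(new boost::asio::io_service::work(svc));
 *   boost::thread t(boost::bind(&runIt, &svc));
 *   svc.post(someHandler);  // someHandler: any assumed no-arg functor
 *   keepAlive.reset();      // let run() return once the queue drains
 *   t.join();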
*/
#ifdef HAVE_CONFIG_H
#include <config.h>
#endif

#include "eventdispatcher.h"

#include <log4cxx/logger.h>

static log4cxx::LoggerPtr logger(log4cxx::Logger::getLogger("hedwig."__FILE__));

using namespace Hedwig;

const int DEFAULT_NUM_DISPATCH_THREADS = 1;

IOService::IOService() {
}

IOService::~IOService() {}

void IOService::start() {
  if (work.get()) {
    return;
  }
  work = work_ptr(new boost::asio::io_service::work(service));
}

void IOService::stop() {
  if (!work.get()) {
    return;
  }
  work = work_ptr();
  service.stop();
}

void IOService::run() {
  while (true) {
    try {
      service.run();
      break;
    } catch (std::exception &e) {
      LOG4CXX_ERROR(logger, "Exception in IO Service " << this << " : " << e.what());
    }
  }
}

EventDispatcher::EventDispatcher(const Configuration& conf)
  : conf(conf), running(false), next_io_service(0) {
  num_threads = conf.getInt(Configuration::NUM_DISPATCH_THREADS,
                            DEFAULT_NUM_DISPATCH_THREADS);
  if (0 == num_threads) {
    LOG4CXX_ERROR(logger, "Number of threads in dispatcher is zero");
    throw std::runtime_error("number of threads in dispatcher is zero");
  }
  for (size_t i = 0; i < num_threads; i++) {
    services.push_back(IOServicePtr(new IOService()));
  }
  LOG4CXX_DEBUG(logger, "Created EventDispatcher " << this);
}

void EventDispatcher::run_forever(IOServicePtr service, size_t idx) {
  LOG4CXX_INFO(logger, "Starting event dispatcher " << idx);
  service->run();
  LOG4CXX_INFO(logger, "Event dispatcher " << idx << " done");
}

void EventDispatcher::start() {
  if (running) {
    return;
  }
  for (size_t i = 0; i < num_threads; i++) {
    IOServicePtr service = services[i];
    service->start();
    // new thread
    thread_ptr t(new boost::thread(boost::bind(&EventDispatcher::run_forever,
                                               this, service, i)));
    threads.push_back(t);
  }
  running = true;
}

void EventDispatcher::stop() {
  if (!running) {
    return;
  }
  for (size_t i = 0; i < num_threads; i++) {
    services[i]->stop();
  }
  for (size_t i = 0; i < num_threads; i++) {
    threads[i]->join();
  }
  threads.clear();
  running = false;
}

EventDispatcher::~EventDispatcher() {
  services.clear();
}

IOServicePtr& EventDispatcher::getService() {
  size_t next = 0;
  {
    boost::lock_guard<boost::mutex> lock(next_lock);
    next = next_io_service;
    next_io_service = (next_io_service + 1) % num_threads;
  }
  return services[next];
}
bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/lib/eventdispatcher.h000066400000000000000000000042161244507361200264230ustar00rootroot00000000000000/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
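 *
 * [Editorial sketch, not part of the original file; conf is an assumed
 * Configuration instance.] Typical lifecycle of the dispatcher declared
 * below -- one IO thread per configured io_service, handed out round-robin:
 *
 *   EventDispatcher dispatcher(conf);
 *   dispatcher.start();                        // one thread per io_service
 *   IOServicePtr io = dispatcher.getService(); // round-robin selection
 *   io->getService().post(myHandler);          // myHandler: assumed functor
 *   dispatcher.stop();                         // stop services, join threads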
*/
#ifndef EVENTDISPATCHER_H
#define EVENTDISPATCHER_H

#include <boost/asio.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/thread/thread.hpp>

#include <hedwig/client.h>

namespace Hedwig {

  typedef boost::shared_ptr<boost::asio::io_service::work> work_ptr;
  typedef boost::shared_ptr<boost::thread> thread_ptr;

  class IOService;
  typedef boost::shared_ptr<IOService> IOServicePtr;

  class IOService {
  public:
    IOService();
    virtual ~IOService();

    // start the io service
    void start();
    // stop the io service
    void stop();
    // run the io service
    void run();

    inline boost::asio::io_service& getService() {
      return service;
    }

  private:
    boost::asio::io_service service;
    work_ptr work;
  };

  class EventDispatcher {
  public:
    EventDispatcher(const Configuration& conf);
    ~EventDispatcher();

    void start();
    void stop();

    IOServicePtr& getService();

  private:
    void run_forever(IOServicePtr service, size_t idx);

    const Configuration& conf;
    // number of threads
    size_t num_threads;
    // running flag
    bool running;
    // pool of io_services.
    std::vector<IOServicePtr> services;
    // threads
    std::vector<thread_ptr> threads;
    // next io_service used for a connection
    boost::mutex next_lock;
    std::size_t next_io_service;
  };
}

#endif
bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/lib/exceptions.cpp000066400000000000000000000016771244507361200257550ustar00rootroot00000000000000/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
#ifdef HAVE_CONFIG_H
#include <config.h>
#endif

#include
#include
#include

using namespace Hedwig;
bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/lib/filterablemessagehandler.cpp000066400000000000000000000031521244507361200306000ustar00rootroot00000000000000/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
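 *
 * [Editorial sketch, not part of the original file; the testMessage
 * signature is inferred from its call site below.] FilterableMessageHandler
 * is a decorator: consume() asks the filter whether to deliver, and
 * acknowledges filtered-out messages immediately. A filter that drops
 * empty bodies could look like:
 *
 *   class NonEmptyFilter : public ClientMessageFilter {
 *   public:
 *     virtual bool testMessage(const Message& msg) {
 *       return !msg.body().empty();  // deliver only non-empty messages
 *     }
 *   };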
*/
#ifdef HAVE_CONFIG_H
#include <config.h>
#endif

#include "filterablemessagehandler.h"

using namespace Hedwig;

FilterableMessageHandler::FilterableMessageHandler(const MessageHandlerCallbackPtr& msgHandler,
                                                   const ClientMessageFilterPtr& msgFilter)
  : msgHandler(msgHandler), msgFilter(msgFilter) {
}

FilterableMessageHandler::~FilterableMessageHandler() {
}

void FilterableMessageHandler::consume(const std::string& topic,
                                       const std::string& subscriberId,
                                       const Message& msg,
                                       OperationCallbackPtr& callback) {
  bool deliver = true;
  if (0 != msgFilter.get()) {
    deliver = msgFilter->testMessage(msg);
  }
  if (deliver) {
    msgHandler->consume(topic, subscriberId, msg, callback);
  } else {
    // filtered out: acknowledge immediately so delivery can continue
    callback->operationComplete();
  }
}
bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/lib/filterablemessagehandler.h000066400000000000000000000030771244507361200302530ustar00rootroot00000000000000/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
#ifndef FILTERABLE_MESSAGE_HANDLER_H
#define FILTERABLE_MESSAGE_HANDLER_H

#include <hedwig/callback.h>
#include <hedwig/protocol.h>

#ifdef USE_BOOST_TR1
#include <boost/tr1/memory.hpp>
#else
#include <tr1/memory>
#endif

namespace Hedwig {
  class FilterableMessageHandler : public MessageHandlerCallback {
  public:
    FilterableMessageHandler(const MessageHandlerCallbackPtr& msgHandler,
                             const ClientMessageFilterPtr& msgFilter);

    virtual void consume(const std::string& topic, const std::string& subscriberId,
                         const Message& msg, OperationCallbackPtr& callback);

    virtual ~FilterableMessageHandler();

  private:
    const MessageHandlerCallbackPtr msgHandler;
    const ClientMessageFilterPtr msgFilter;
  };
};

#endif
bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/lib/multiplexsubscriberimpl.cpp000066400000000000000000000522001244507361200305530ustar00rootroot00000000000000/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
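 *
 * [Editorial note, not part of the original file.] "Multiplex" means one
 * channel per hub server carries every subscription owned by that hub, so
 * the manager keeps two maps: (topic, subscriber) -> handler and
 * host -> handler. A TopicSubscriber is simply a pair, which is why the
 * code below reads ts.first for the topic and ts.second for the
 * subscriber id:
 *
 *   TopicSubscriber ts("my-topic", "subscriber-1");
 *   // ts.first == "my-topic", ts.second == "subscriber-1"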
*/ #ifdef HAVE_CONFIG_H #include #endif #include #include "multiplexsubscriberimpl.h" #include "util.h" static log4cxx::LoggerPtr logger(log4cxx::Logger::getLogger("hedwig."__FILE__)); using namespace Hedwig; RemoveSubscriptionCallback::RemoveSubscriptionCallback( const MultiplexDuplexChannelManagerPtr& channelManager, const MultiplexSubscriberClientChannelHandlerPtr& handler, const TopicSubscriber& ts, const OperationCallbackPtr& callback) : channelManager(channelManager), handler(handler), topicSubscriber(ts), callback(callback) { } void RemoveSubscriptionCallback::operationComplete(const ResponseBody& resp) { handler->removeActiveSubscriber(topicSubscriber); channelManager->removeSubscriptionChannelHandler(topicSubscriber, handler); callback->operationComplete(); } void RemoveSubscriptionCallback::operationFailed(const std::exception& exception) { callback->operationFailed(exception); } MultiplexSubscriberClientChannelHandler::MultiplexSubscriberClientChannelHandler( const MultiplexDuplexChannelManagerPtr& channelManager, ResponseHandlerMap& handlers) : SubscriberClientChannelHandler(boost::dynamic_pointer_cast(channelManager), handlers), mChannelManager(channelManager) { } void MultiplexSubscriberClientChannelHandler::removeActiveSubscriber( const TopicSubscriber& ts) { ActiveSubscriberPtr as; { boost::lock_guard lock(subscribers_lock); as = activeSubscribers[ts]; activeSubscribers.erase(ts); LOG4CXX_DEBUG(logger, "Removed " << ts << " from channel " << channel.get() << "."); } if (as.get()) { as->close(); } } bool MultiplexSubscriberClientChannelHandler::addActiveSubscriber( const PubSubDataPtr& op, const SubscriptionPreferencesPtr& preferences) { TopicSubscriber ts(op->getTopic(), op->getSubscriberId()); boost::lock_guard lock(subscribers_lock); ActiveSubscriberPtr subscriber = activeSubscribers[ts]; if (subscriber.get()) { // NOTE: it should not happen here, since we had subscribers mapping to // avoid two same topic subscribers alive in a client. 
LOG4CXX_WARN(logger, "Duplicate " << *subscriber << " has been found alive on channel " << channel.get()); return false; } subscriber = ActiveSubscriberPtr(new ActiveSubscriber(op, channel, preferences, channelManager)); activeSubscribers[ts] = subscriber; return true; } void MultiplexSubscriberClientChannelHandler::handleSubscriptionEvent( const TopicSubscriber& ts, const SubscriptionEvent event) { ActiveSubscriberPtr as = getActiveSubscriber(ts); if (!as.get()) { LOG4CXX_ERROR(logger, "No Active Subscriber found alive for " << ts << " on channel " << channel.get() << " receiving event " << event); return; } if (!as->isResubscribeRequired() && (TOPIC_MOVED == event || SUBSCRIPTION_FORCED_CLOSED == event)) { // topic has moved if (TOPIC_MOVED == event) { // remove topic mapping channelManager->clearHostForTopic(as->getTopic(), getChannel()->getHostAddress()); } // first remove the topic subscriber from current handler removeActiveSubscriber(ts); // second remove it from the mapping mChannelManager->removeSubscriptionChannelHandler(ts, boost::dynamic_pointer_cast(shared_from_this())); } as->processEvent(ts.first, ts.second, event); } void MultiplexSubscriberClientChannelHandler::deliverMessage(const TopicSubscriber& ts, const PubSubResponsePtr& m) { ActiveSubscriberPtr as = getActiveSubscriber(ts); if (!as.get()) { LOG4CXX_ERROR(logger, "No Active Subscriber found alive for " << ts << " on channel " << channel.get()); return; } as->deliverMessage(m); } void MultiplexSubscriberClientChannelHandler::startDelivery( const TopicSubscriber& ts, const MessageHandlerCallbackPtr& handler, const ClientMessageFilterPtr& filter) { ActiveSubscriberPtr as = getActiveSubscriber(ts); if (!as.get()) { LOG4CXX_ERROR(logger, "No Active Subscriber found alive for " << ts << " on channel " << channel.get()); throw NotSubscribedException(); } as->startDelivery(handler, filter); } void MultiplexSubscriberClientChannelHandler::stopDelivery(const TopicSubscriber& ts) { ActiveSubscriberPtr as = getActiveSubscriber(ts); if (!as.get()) { LOG4CXX_ERROR(logger, "No Active Subscriber found alive for " << ts << " on channel " << channel.get()); throw NotSubscribedException(); } as->stopDelivery(); } bool MultiplexSubscriberClientChannelHandler::hasSubscription(const TopicSubscriber& ts) { ActiveSubscriberPtr as = getActiveSubscriber(ts); if (!as.get()) { return false; } return ts.first == as->getTopic() && ts.second == as->getSubscriberId(); } void MultiplexSubscriberClientChannelHandler::asyncCloseSubscription( const TopicSubscriber& ts, const OperationCallbackPtr& callback) { // just remove the active subscriber ActiveSubscriberPtr as = getActiveSubscriber(ts); if (!as.get()) { LOG4CXX_DEBUG(logger, "No Active Subscriber found for " << ts << " when closing its subscription."); mChannelManager->removeSubscriptionChannelHandler(ts, boost::dynamic_pointer_cast(shared_from_this())); callback->operationComplete(); return; } RemoveSubscriptionCallback * removeCb = new RemoveSubscriptionCallback( mChannelManager, boost::dynamic_pointer_cast(shared_from_this()), ts, callback); ResponseCallbackPtr respCallback(removeCb); PubSubDataPtr data = PubSubData::forCloseSubscriptionRequest(channelManager->nextTxnId(), ts.second, ts.first, respCallback); channelManager->submitOp(data); } void MultiplexSubscriberClientChannelHandler::consume(const TopicSubscriber& ts, const MessageSeqId& messageSeqId) { ActiveSubscriberPtr as = getActiveSubscriber(ts); if (!as.get()) { LOG4CXX_ERROR(logger, "No Active Subscriber found for " << ts << " 
alive on channel " << channel.get()); return; } as->consume(messageSeqId); } void MultiplexSubscriberClientChannelHandler::onChannelDisconnected(const DuplexChannelPtr& channel) { // Clear the subscription channel MultiplexSubscriberClientChannelHandlerPtr that = boost::dynamic_pointer_cast(shared_from_this()); // remove the channel from channel manager mChannelManager->removeSubscriptionChannelHandler( getChannel()->getHostAddress(), that); // disconnect all the subscribers alive on this channel // make a copy of active subscribers to process event, the size is estimated std::vector copyofActiveSubscribers(activeSubscribers.size()); copyofActiveSubscribers.clear(); { boost::lock_guard lock(subscribers_lock); ActiveSubscriberMap::iterator iter = activeSubscribers.begin(); for (; iter != activeSubscribers.end(); ++iter) { ActiveSubscriberPtr as = iter->second; if (as.get()) { // clear topic ownership mChannelManager->clearHostForTopic(as->getTopic(), channel->getHostAddress()); if (!as->isResubscribeRequired()) { TopicSubscriber ts(as->getTopic(), as->getSubscriberId()); // remove the subscription handler if no need to resubscribe mChannelManager->removeSubscriptionChannelHandler(ts, that); } // close the active subscriber as->close(); copyofActiveSubscribers.push_back(as); } } activeSubscribers.clear(); } // processEvent would emit subscription event to user's callback // so it would be better to not put the logic under a lock. std::vector::iterator viter = copyofActiveSubscribers.begin(); for (; viter != copyofActiveSubscribers.end(); ++viter) { ActiveSubscriberPtr as = *viter; if (as.get()) { LOG4CXX_INFO(logger, "Tell " << *as << " his channel " << channel.get() << " is disconnected."); as->processEvent(as->getTopic(), as->getSubscriberId(), TOPIC_MOVED); } } copyofActiveSubscribers.clear(); } void MultiplexSubscriberClientChannelHandler::closeHandler() { boost::lock_guard lock(subscribers_lock); ActiveSubscriberMap::iterator iter = activeSubscribers.begin(); for (; iter != activeSubscribers.end(); ++iter) { ActiveSubscriberPtr as = iter->second; if (as.get()) { as->close(); LOG4CXX_DEBUG(logger, "Closed " << *as << "."); } } } // // Subscribe Response Handler // MultiplexSubscribeResponseHandler::MultiplexSubscribeResponseHandler( const MultiplexDuplexChannelManagerPtr& channelManager) : ResponseHandler(boost::dynamic_pointer_cast(channelManager)), mChannelManager(channelManager) { } void MultiplexSubscribeResponseHandler::handleSuccessResponse( const PubSubResponsePtr& m, const PubSubDataPtr& txn, const MultiplexSubscriberClientChannelHandlerPtr& handler) { // for subscribe request, check whether is any subscription preferences received SubscriptionPreferencesPtr preferences; if (m->has_responsebody()) { const ResponseBody& respBody = m->responsebody(); if (respBody.has_subscriberesponse()) { const SubscribeResponse& resp = respBody.subscriberesponse(); if (resp.has_preferences()) { preferences = SubscriptionPreferencesPtr(new SubscriptionPreferences(resp.preferences())); } } } TopicSubscriber ts(txn->getTopic(), txn->getSubscriberId()); if (!mChannelManager->storeSubscriptionChannelHandler(ts, txn, handler)) { // found existed subscription channel handler if (txn->isResubscribeRequest()) { txn->getCallback()->operationFailed(ResubscribeException()); } else { txn->getCallback()->operationFailed(AlreadySubscribedException()); } return; } // If the subscriber has been alive on this channel if (!handler->addActiveSubscriber(txn, preferences)) { 
txn->getCallback()->operationFailed(AlreadySubscribedException()); return; } if (m->has_responsebody()) { txn->getCallback()->operationComplete(m->responsebody()); } else { txn->getCallback()->operationComplete(ResponseBody()); } } void MultiplexSubscribeResponseHandler::handleResponse( const PubSubResponsePtr& m, const PubSubDataPtr& txn, const DuplexChannelPtr& channel) { if (!txn.get()) { LOG4CXX_ERROR(logger, "Invalid transaction recevied from channel " << channel.get()); return; } LOG4CXX_DEBUG(logger, "message received with status " << m->statuscode() << " from channel " << channel.get()); MultiplexSubscriberClientChannelHandlerPtr handler = boost::dynamic_pointer_cast(channel->getChannelHandler()); if (!handler.get()) { LOG4CXX_ERROR(logger, "No simple subscriber client channel handler found for channel " << channel.get() << "."); // No channel handler, but we still need to close the channel channel->close(); txn->getCallback()->operationFailed(NoChannelHandlerException()); return; } // we don't close any subscription channel when encountering subscribe failures switch (m->statuscode()) { case SUCCESS: handleSuccessResponse(m, txn, handler); break; case SERVICE_DOWN: txn->getCallback()->operationFailed(ServiceDownException()); break; case CLIENT_ALREADY_SUBSCRIBED: case TOPIC_BUSY: txn->getCallback()->operationFailed(AlreadySubscribedException()); break; case CLIENT_NOT_SUBSCRIBED: txn->getCallback()->operationFailed(NotSubscribedException()); break; case NOT_RESPONSIBLE_FOR_TOPIC: redirectRequest(m, txn, channel); break; default: LOG4CXX_ERROR(logger, "Unexpected response " << m->statuscode() << " for " << txn->getTxnId()); txn->getCallback()->operationFailed(UnexpectedResponseException()); break; } } // // Multiplex Duplex Channel Manager // MultiplexDuplexChannelManager::MultiplexDuplexChannelManager(const Configuration& conf) : DuplexChannelManager(conf) { LOG4CXX_DEBUG(logger, "Created MultiplexDuplexChannelManager " << this); } MultiplexDuplexChannelManager::~MultiplexDuplexChannelManager() { LOG4CXX_DEBUG(logger, "Destroyed MultiplexDuplexChannelManager " << this); } void MultiplexDuplexChannelManager::start() { // Add subscribe response handler subscriptionHandlers[SUBSCRIBE] = ResponseHandlerPtr(new MultiplexSubscribeResponseHandler( boost::dynamic_pointer_cast(shared_from_this()))); subscriptionHandlers[CLOSESUBSCRIPTION] = ResponseHandlerPtr(new CloseSubscriptionResponseHandler(shared_from_this())); DuplexChannelManager::start(); } void MultiplexDuplexChannelManager::close() { DuplexChannelManager::close(); subscriptionHandlers.clear(); } SubscriberClientChannelHandlerPtr MultiplexDuplexChannelManager::getSubscriptionChannelHandler(const TopicSubscriber& ts) { boost::shared_lock lock(subscribers_lock); return boost::dynamic_pointer_cast(subscribers[ts]); } DuplexChannelPtr MultiplexDuplexChannelManager::getSubscriptionChannel( const TopicSubscriber& ts, const bool /*isResubscribeRequest*/) { const HostAddress& addr = getHostForTopic(ts.first); if (addr.isNullHost()) { return DuplexChannelPtr(); } else { // we had known which hub server owned the topic DuplexChannelPtr ch = getSubscriptionChannel(addr); if (ch.get()) { return ch; } ch = createSubscriptionChannel(addr); return storeSubscriptionChannel(ch, true); } } DuplexChannelPtr MultiplexDuplexChannelManager::getSubscriptionChannel(const HostAddress& addr) { MultiplexSubscriberClientChannelHandlerPtr handler; { boost::shared_lock lock(subhandlers_lock); handler = subhandlers[addr]; } if (handler.get()) { return 
boost::dynamic_pointer_cast(handler->getChannel()); } else { return DuplexChannelPtr(); } } DuplexChannelPtr MultiplexDuplexChannelManager::createSubscriptionChannel(const HostAddress& addr) { // Create a multiplex subscriber channel handler MultiplexSubscriberClientChannelHandler * subscriberHandler = new MultiplexSubscriberClientChannelHandler( boost::dynamic_pointer_cast(shared_from_this()), subscriptionHandlers); ChannelHandlerPtr channelHandler(subscriberHandler); // Create a subscription channel DuplexChannelPtr channel = createChannel(dispatcher->getService(), addr, channelHandler); subscriberHandler->setChannel(boost::dynamic_pointer_cast(channel)); LOG4CXX_INFO(logger, "New multiplex subscription channel " << channel.get() << " is created to host " << addr << ", whose channel handler is " << subscriberHandler); return channel; } DuplexChannelPtr MultiplexDuplexChannelManager::storeSubscriptionChannel( const DuplexChannelPtr& ch, bool doConnect) { const HostAddress& host = ch->getHostAddress(); MultiplexSubscriberClientChannelHandlerPtr handler = boost::dynamic_pointer_cast(ch->getChannelHandler()); bool useOldCh; MultiplexSubscriberClientChannelHandlerPtr oldHandler; DuplexChannelPtr oldChannel; { boost::lock_guard lock(subhandlers_lock); oldHandler = subhandlers[host]; if (!oldHandler.get()) { subhandlers[host] = handler; useOldCh = false; } else { oldChannel = boost::dynamic_pointer_cast(oldHandler->getChannel()); useOldCh = true; } } if (useOldCh) { LOG4CXX_DEBUG(logger, "Subscription channel " << oldChannel.get() << " with handler " << oldHandler.get() << " was used to serve subscribe requests to host " << host << " so close new channel " << ch.get() << " with handler " << handler.get() << "."); handler->close(); return oldChannel; } else { if (doConnect) { ch->connect(); } LOG4CXX_DEBUG(logger, "Storing channel " << ch.get() << " with handler " << handler.get() << " for host " << host << "."); return ch; } } bool MultiplexDuplexChannelManager::removeSubscriptionChannelHandler( const TopicSubscriber& ts, const MultiplexSubscriberClientChannelHandlerPtr& handler) { boost::lock_guard lock(subscribers_lock); MultiplexSubscriberClientChannelHandlerPtr existedHandler = subscribers[ts]; if (existedHandler.get() == handler.get()) { subscribers.erase(ts); return true; } else { return false; } } bool MultiplexDuplexChannelManager::removeSubscriptionChannelHandler( const HostAddress& addr, const MultiplexSubscriberClientChannelHandlerPtr& handler) { bool removed; { boost::lock_guard lock(subhandlers_lock); MultiplexSubscriberClientChannelHandlerPtr existedHandler = subhandlers[addr]; if (existedHandler.get() == handler.get()) { subhandlers.erase(addr); removed = true; } else { removed = false; } } if (removed && handler.get()) { handler->close(); } return removed; } bool MultiplexDuplexChannelManager::storeSubscriptionChannelHandler( const TopicSubscriber& ts, const PubSubDataPtr& txn, const MultiplexSubscriberClientChannelHandlerPtr& handler) { MultiplexSubscriberClientChannelHandlerPtr other; bool success = false; bool isResubscribeRequest = txn->isResubscribeRequest(); { boost::lock_guard lock(subscribers_lock); other = subscribers[ts]; if (other.get()) { if (isResubscribeRequest) { DuplexChannelPtr& origChannel = txn->getOrigChannelForResubscribe(); const AbstractDuplexChannelPtr& otherChannel = other->getChannel(); if (origChannel.get() != otherChannel.get()) { // channel has been changed for a specific subscriber // which means the client closesub and subscribe again // when 
channel disconnect to resubscribe for it. // so we should not let the resubscribe succeed success = false; } else { subscribers[ts] = handler; success = true; } } else { success = false; } } else { if (isResubscribeRequest) { // if it is a resubscribe request and there is no handler found // which means a closesub has been called when resubscribing // so we should not let the resubscribe succeed success = false; } else { subscribers[ts] = handler; success = true; } } } return success; } void MultiplexDuplexChannelManager::asyncCloseSubscription( const TopicSubscriber& ts, const OperationCallbackPtr& callback) { SubscriberClientChannelHandlerPtr handler = getSubscriptionChannelHandler(ts); if (!handler.get()) { LOG4CXX_DEBUG(logger, "No subscription channel handler found for " << ts << "."); callback->operationComplete(); return; } handler->asyncCloseSubscription(ts, callback); } void MultiplexDuplexChannelManager::handoverDelivery( const TopicSubscriber& ts, const MessageHandlerCallbackPtr& msgHandler, const ClientMessageFilterPtr& filter) { SubscriberClientChannelHandlerPtr handler = getSubscriptionChannelHandler(ts); if (!handler.get()) { LOG4CXX_WARN(logger, "No subscription channel handler found for " << ts << " to handover delivery with handler " << msgHandler.get() << ", filter " << filter.get() << "."); return; } try { handler->startDelivery(ts, msgHandler, filter); } catch(const AlreadyStartDeliveryException& ase) { LOG4CXX_WARN(logger, "Other one has started delivery for " << ts << " using brand new message handler. " << "It is OK that we could give up handing over old message handler."); } catch(const std::exception& e) { LOG4CXX_WARN(logger, "Error when handing over old message handler for " << ts << " : " << e.what()); } } bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/lib/multiplexsubscriberimpl.h000066400000000000000000000166761244507361200302410ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
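 *
 * [Editorial sketch, not part of the original file.] The subscriber and
 * handler maps declared below are guarded by boost::shared_mutex so that
 * lookups can proceed concurrently while mutations are exclusive, the same
 * pattern the .cpp file uses:
 *
 *   boost::shared_mutex m;
 *   {  // many concurrent readers
 *     boost::shared_lock<boost::shared_mutex> readLock(m);
 *     // ... look up subscribers[ts] ...
 *   }
 *   {  // single exclusive writer
 *     boost::lock_guard<boost::shared_mutex> writeLock(m);
 *     // ... subscribers.erase(ts) ...
 *   }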
*/ #ifndef MULTIPLEX_SUBSCRIBE_IMPL_H #define MULTIPLEX_SUBSCRIBE_IMPL_H #include #include "subscriberimpl.h" #include "clientimpl.h" namespace Hedwig { class MultiplexDuplexChannelManager; typedef boost::shared_ptr MultiplexDuplexChannelManagerPtr; // Multiplex Subscription Channel Handler : multiple subscription per channel class MultiplexSubscriberClientChannelHandler : public SubscriberClientChannelHandler { public: MultiplexSubscriberClientChannelHandler(const MultiplexDuplexChannelManagerPtr& channelManager, ResponseHandlerMap& handlers); virtual ~MultiplexSubscriberClientChannelHandler() {} // remove a given topic subscriber void removeActiveSubscriber(const TopicSubscriber& ts); // Add the subscriber serving on this channel bool addActiveSubscriber(const PubSubDataPtr& op, const SubscriptionPreferencesPtr& preferences); virtual void handleSubscriptionEvent(const TopicSubscriber& ts, const SubscriptionEvent event); // Deliver a received message to given message handler virtual void deliverMessage(const TopicSubscriber& ts, const PubSubResponsePtr& m); // Start Delivery for a given topic subscriber virtual void startDelivery(const TopicSubscriber& ts, const MessageHandlerCallbackPtr& handler, const ClientMessageFilterPtr& filter); // Stop Delivery for a given topic subscriber virtual void stopDelivery(const TopicSubscriber& ts); // Has Subscription on the Channel virtual bool hasSubscription(const TopicSubscriber& ts); // Close Subscription for a given topic subscriber virtual void asyncCloseSubscription(const TopicSubscriber& ts, const OperationCallbackPtr& callback); // Consume message for a given topic subscriber virtual void consume(const TopicSubscriber& ts, const MessageSeqId& messageSeqId); protected: // Subscription channel disconnected: reconnect the subscription channel virtual void onChannelDisconnected(const DuplexChannelPtr& channel); virtual void closeHandler(); private: inline const ActiveSubscriberPtr& getActiveSubscriber(const TopicSubscriber& ts) { boost::shared_lock lock(subscribers_lock); return activeSubscribers[ts]; } typedef std::tr1::unordered_map ActiveSubscriberMap; ActiveSubscriberMap activeSubscribers; boost::shared_mutex subscribers_lock; const MultiplexDuplexChannelManagerPtr mChannelManager; }; typedef boost::shared_ptr MultiplexSubscriberClientChannelHandlerPtr; // // Multiplex Duplex Channel Manager // class MultiplexDuplexChannelManager : public DuplexChannelManager { public: explicit MultiplexDuplexChannelManager(const Configuration& conf); virtual ~MultiplexDuplexChannelManager(); bool storeSubscriptionChannelHandler( const TopicSubscriber& ts, const PubSubDataPtr& txn, const MultiplexSubscriberClientChannelHandlerPtr& handler); bool removeSubscriptionChannelHandler( const TopicSubscriber& ts, const MultiplexSubscriberClientChannelHandlerPtr& handler); bool removeSubscriptionChannelHandler( const HostAddress& addr, const MultiplexSubscriberClientChannelHandlerPtr& handler); // Get the subscription channel handler for a given subscription virtual SubscriberClientChannelHandlerPtr getSubscriptionChannelHandler(const TopicSubscriber& ts); // Close subscription for a given subscription virtual void asyncCloseSubscription(const TopicSubscriber& ts, const OperationCallbackPtr& callback); virtual void handoverDelivery(const TopicSubscriber& ts, const MessageHandlerCallbackPtr& handler, const ClientMessageFilterPtr& filter); // start the channel manager virtual void start(); // close the channel manager virtual void close(); protected: virtual 
DuplexChannelPtr getSubscriptionChannel(const TopicSubscriber& ts, const bool isResubscribeRequest); virtual DuplexChannelPtr getSubscriptionChannel(const HostAddress& addr); virtual DuplexChannelPtr createSubscriptionChannel(const HostAddress& addr); virtual DuplexChannelPtr storeSubscriptionChannel(const DuplexChannelPtr& ch, bool doConnect); private: std::tr1::unordered_map subhandlers; boost::shared_mutex subhandlers_lock; // A inverse mapping for all available topic subscribers std::tr1::unordered_map subscribers; boost::shared_mutex subscribers_lock; // Response Handlers for subscription requests ResponseHandlerMap subscriptionHandlers; }; // Subscribe Response Handler class MultiplexSubscribeResponseHandler : public ResponseHandler { public: explicit MultiplexSubscribeResponseHandler(const MultiplexDuplexChannelManagerPtr& channelManager); virtual ~MultiplexSubscribeResponseHandler() {} virtual void handleResponse(const PubSubResponsePtr& m, const PubSubDataPtr& txn, const DuplexChannelPtr& channel); private: void handleSuccessResponse(const PubSubResponsePtr& m, const PubSubDataPtr& txn, const MultiplexSubscriberClientChannelHandlerPtr& handler); const MultiplexDuplexChannelManagerPtr mChannelManager; }; // Callback delegation to remove subscription from a channel class RemoveSubscriptionCallback : public ResponseCallback { public: explicit RemoveSubscriptionCallback( const MultiplexDuplexChannelManagerPtr& channelManager, const MultiplexSubscriberClientChannelHandlerPtr& handler, const TopicSubscriber& ts, const OperationCallbackPtr& callback); virtual void operationComplete(const ResponseBody& response); virtual void operationFailed(const std::exception& exception); private: const MultiplexDuplexChannelManagerPtr channelManager; const MultiplexSubscriberClientChannelHandlerPtr handler; const TopicSubscriber topicSubscriber; const OperationCallbackPtr callback; }; } /* Namespace Hedwig */ #endif bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/lib/publisherimpl.cpp000066400000000000000000000117451244507361200264520ustar00rootroot00000000000000 /** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
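 *
 * [Editorial sketch, not part of the original file; publisher is an
 * assumed Publisher& obtained from the client.] Every publish variant
 * below funnels into doPublish(); the synchronous form simply parks the
 * caller on a SyncCallback until the asynchronous path completes:
 *
 *   // asynchronous: returns at once, callback fires on an IO thread
 *   publisher.asyncPublish("my-topic", "payload", myOpCallback);
 *
 *   // synchronous: async path + wait(), rethrows any failure
 *   PublishResponsePtr resp = publisher.publish("my-topic", "payload");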
*/ #ifdef HAVE_CONFIG_H #include #endif #include #include #include "publisherimpl.h" #include "channel.h" static log4cxx::LoggerPtr logger(log4cxx::Logger::getLogger("hedwig."__FILE__)); using namespace Hedwig; PublishResponseAdaptor::PublishResponseAdaptor(const PublishResponseCallbackPtr& pubCallback) : pubCallback(pubCallback) { } void PublishResponseAdaptor::operationComplete(const ResponseBody& result) { if (result.has_publishresponse()) { PublishResponse *resp = new PublishResponse(); resp->CopyFrom(result.publishresponse()); PublishResponsePtr respPtr(resp); pubCallback->operationComplete(respPtr); } else { // return empty response pubCallback->operationComplete(PublishResponsePtr()); } } void PublishResponseAdaptor::operationFailed(const std::exception& exception) { pubCallback->operationFailed(exception); } PublisherImpl::PublisherImpl(const DuplexChannelManagerPtr& channelManager) : channelManager(channelManager) { } PublishResponsePtr PublisherImpl::publish(const std::string& topic, const Message& message) { SyncCallback* cb = new SyncCallback( channelManager->getConfiguration().getInt(Configuration::SYNC_REQUEST_TIMEOUT, DEFAULT_SYNC_REQUEST_TIMEOUT)); PublishResponseCallbackPtr callback(cb); asyncPublishWithResponse(topic, message, callback); cb->wait(); cb->throwExceptionIfNeeded(); return cb->getResult(); } PublishResponsePtr PublisherImpl::publish(const std::string& topic, const std::string& message) { Message msg; msg.set_body(message); return publish(topic, msg); } void PublisherImpl::asyncPublish(const std::string& topic, const Message& message, const OperationCallbackPtr& callback) { // use release after callback to release the channel after the callback is called ResponseCallbackPtr respCallback(new ResponseCallbackAdaptor(callback)); doPublish(topic, message, respCallback); } void PublisherImpl::asyncPublish(const std::string& topic, const std::string& message, const OperationCallbackPtr& callback) { Message msg; msg.set_body(message); asyncPublish(topic, msg, callback); } void PublisherImpl::asyncPublishWithResponse(const std::string& topic, const Message& message, const PublishResponseCallbackPtr& callback) { ResponseCallbackPtr respCallback(new PublishResponseAdaptor(callback)); doPublish(topic, message, respCallback); } void PublisherImpl::doPublish(const std::string& topic, const Message& message, const ResponseCallbackPtr& callback) { PubSubDataPtr data = PubSubData::forPublishRequest(channelManager->nextTxnId(), topic, message, callback); LOG4CXX_INFO(logger, "Publish message (topic:" << data->getTopic() << ", txn:" << data->getTxnId() << ")."); channelManager->submitOp(data); } // // Publish Response Handler // PublishResponseHandler::PublishResponseHandler(const DuplexChannelManagerPtr& channelManager) : ResponseHandler(channelManager) { LOG4CXX_DEBUG(logger, "Created PublishResponseHandler for ChannelManager " << channelManager.get()); } void PublishResponseHandler::handleResponse(const PubSubResponsePtr& m, const PubSubDataPtr& txn, const DuplexChannelPtr& channel) { switch (m->statuscode()) { case SUCCESS: if (m->has_responsebody()) { txn->getCallback()->operationComplete(m->responsebody()); } else { txn->getCallback()->operationComplete(ResponseBody()); } break; case SERVICE_DOWN: LOG4CXX_ERROR(logger, "Server responsed with SERVICE_DOWN for " << txn->getTxnId()); txn->getCallback()->operationFailed(ServiceDownException()); break; case NOT_RESPONSIBLE_FOR_TOPIC: redirectRequest(m, txn, channel); break; default: LOG4CXX_ERROR(logger, "Unexpected response " 
<< m->statuscode() << " for " << txn->getTxnId()); txn->getCallback()->operationFailed(UnexpectedResponseException()); break; } } bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/lib/publisherimpl.h000066400000000000000000000047311244507361200261140ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #ifndef PUBLISHER_IMPL_H #define PUBLISHER_IMPL_H #include #include #include "clientimpl.h" #include "data.h" namespace Hedwig { class PublishResponseAdaptor : public ResponseCallback { public: PublishResponseAdaptor(const PublishResponseCallbackPtr& pubCallback); void operationComplete(const ResponseBody & result); void operationFailed(const std::exception& exception); private: PublishResponseCallbackPtr pubCallback; }; class PublishResponseHandler : public ResponseHandler { public: PublishResponseHandler(const DuplexChannelManagerPtr& channelManager); virtual ~PublishResponseHandler() {}; virtual void handleResponse(const PubSubResponsePtr& m, const PubSubDataPtr& txn, const DuplexChannelPtr& channel); }; class PublisherImpl : public Publisher { public: PublisherImpl(const DuplexChannelManagerPtr& channelManager); PublishResponsePtr publish(const std::string& topic, const std::string& message); PublishResponsePtr publish(const std::string& topic, const Message& message); void asyncPublish(const std::string& topic, const std::string& message, const OperationCallbackPtr& callback); void asyncPublish(const std::string& topic, const Message& message, const OperationCallbackPtr& callback); void asyncPublishWithResponse(const std::string& topic, const Message& messsage, const PublishResponseCallbackPtr& callback); void doPublish(const std::string& topic, const Message& message, const ResponseCallbackPtr& callback); private: DuplexChannelManagerPtr channelManager; }; }; #endif bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/lib/simplesubscriberimpl.cpp000066400000000000000000000454511244507361200300330ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
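 *
 * [Editorial note, not part of the original file; conf is an assumed
 * Configuration instance.] SimpleActiveSubscriber below adds crude flow
 * control: messages arriving before startDelivery() are queued, and once
 * the queue reaches the configured bound the channel stops reading from
 * the socket until delivery drains it. The bound comes from configuration
 * (key and default mirror this file):
 *
 *   int maxQueueLen =
 *       conf.getInt(Configuration::MAX_MESSAGE_QUEUE_SIZE, 10);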
*/ #ifdef HAVE_CONFIG_H #include #endif #include #include "simplesubscriberimpl.h" #include "util.h" static log4cxx::LoggerPtr logger(log4cxx::Logger::getLogger("hedwig."__FILE__)); using namespace Hedwig; const int DEFAULT_MAX_MESSAGE_QUEUE_SIZE = 10; SimpleActiveSubscriber::SimpleActiveSubscriber(const PubSubDataPtr& data, const AbstractDuplexChannelPtr& channel, const SubscriptionPreferencesPtr& preferences, const DuplexChannelManagerPtr& channelManager) : ActiveSubscriber(data, channel, preferences, channelManager) { maxQueueLen = channelManager->getConfiguration().getInt(Configuration::MAX_MESSAGE_QUEUE_SIZE, DEFAULT_MAX_MESSAGE_QUEUE_SIZE); } void SimpleActiveSubscriber::doStartDelivery(const MessageHandlerCallbackPtr& handler, const ClientMessageFilterPtr& filter) { ActiveSubscriber::doStartDelivery(handler, filter); // put channel#startReceiving out of lock of subscriber#queue_lock // otherwise we enter dead lock // subscriber#startDelivery(subscriber#queue_lock) => // channel#startReceiving(channel#receiving_lock) => channel->startReceiving(); } void SimpleActiveSubscriber::doStopDelivery() { channel->stopReceiving(); } void SimpleActiveSubscriber::queueMessage(const PubSubResponsePtr& m) { ActiveSubscriber::queueMessage(m); if (queue.size() >= maxQueueLen) { channel->stopReceiving(); } } CloseSubscriptionCallback::CloseSubscriptionCallback(const ActiveSubscriberPtr& activeSubscriber, const SubscriptionEvent event) : activeSubscriber(activeSubscriber), event(event) { } void CloseSubscriptionCallback::operationComplete() { finish(); } void CloseSubscriptionCallback::operationFailed(const std::exception& e) { finish(); } void CloseSubscriptionCallback::finish() { // Process the disconnect logic after cleaning up activeSubscriber->processEvent(activeSubscriber->getTopic(), activeSubscriber->getSubscriberId(), event); } SimpleSubscriberClientChannelHandler::SimpleSubscriberClientChannelHandler( const DuplexChannelManagerPtr& channelManager, ResponseHandlerMap& handlers) : SubscriberClientChannelHandler(channelManager, handlers) { } bool SimpleSubscriberClientChannelHandler::setActiveSubscriber( const PubSubDataPtr& op, const SubscriptionPreferencesPtr& preferences) { boost::lock_guard lock(subscriber_lock); if (subscriber.get()) { LOG4CXX_ERROR(logger, *subscriber << " has been found alive on channel " << channel.get()); return false; } subscriber = ActiveSubscriberPtr(new SimpleActiveSubscriber(op, channel, preferences, channelManager)); return true; } void SimpleSubscriberClientChannelHandler::handleSubscriptionEvent( const TopicSubscriber& ts, const SubscriptionEvent event) { ActiveSubscriberPtr as = getActiveSubscriber(); if (!as.get()) { LOG4CXX_ERROR(logger, "No Active Subscriber found alive on channel " << channel.get() << " receiving subscription event " << event); return; } if (!as->isResubscribeRequired() && (TOPIC_MOVED == event || SUBSCRIPTION_FORCED_CLOSED == event)) { // topic has moved if (TOPIC_MOVED == event) { // remove topic mapping channelManager->clearHostForTopic(as->getTopic(), getChannel()->getHostAddress()); } // close subscription to clean status OperationCallbackPtr closeCb(new CloseSubscriptionCallback(as, event)); TopicSubscriber ts(as->getTopic(), as->getSubscriberId()); channelManager->asyncCloseSubscription(ts, closeCb); } else { as->processEvent(ts.first, ts.second, event); } } void SimpleSubscriberClientChannelHandler::deliverMessage(const TopicSubscriber& ts, const PubSubResponsePtr& m) { ActiveSubscriberPtr as = getActiveSubscriber(); if (!as.get()) 
{ LOG4CXX_ERROR(logger, "No Active Subscriber found alive on channel " << channel.get()); return; } as->deliverMessage(m); } void SimpleSubscriberClientChannelHandler::startDelivery(const TopicSubscriber& ts, const MessageHandlerCallbackPtr& handler, const ClientMessageFilterPtr& filter) { ActiveSubscriberPtr as = getActiveSubscriber(); if (!as.get()) { LOG4CXX_ERROR(logger, "No Active Subscriber found alive on channel " << channel.get()); throw NotSubscribedException(); } as->startDelivery(handler, filter); } void SimpleSubscriberClientChannelHandler::stopDelivery(const TopicSubscriber& ts) { ActiveSubscriberPtr as = getActiveSubscriber(); if (!as.get()) { LOG4CXX_ERROR(logger, "No Active Subscriber found alive on channel " << channel.get()); throw NotSubscribedException(); } as->stopDelivery(); } bool SimpleSubscriberClientChannelHandler::hasSubscription(const TopicSubscriber& ts) { ActiveSubscriberPtr as = getActiveSubscriber(); if (!as.get()) { return false; } return ts.first == as->getTopic() && ts.second == as->getSubscriberId(); } void SimpleSubscriberClientChannelHandler::asyncCloseSubscription( const TopicSubscriber& ts, const OperationCallbackPtr& callback) { // just remove the active subscriber ActiveSubscriberPtr as = getActiveSubscriber(); if (as.get()) { as->close(); clearActiveSubscriber(); } callback->operationComplete(); } void SimpleSubscriberClientChannelHandler::consume(const TopicSubscriber& ts, const MessageSeqId& messageSeqId) { ActiveSubscriberPtr as = getActiveSubscriber(); if (!as.get()) { LOG4CXX_ERROR(logger, "No Active Subscriber found alive on channel " << channel.get()); return; } as->consume(messageSeqId); } void SimpleSubscriberClientChannelHandler::onChannelDisconnected( const DuplexChannelPtr& channel) { ActiveSubscriberPtr as = getActiveSubscriber(); if (!as.get()) { LOG4CXX_ERROR(logger, "No Active Subscriber found when channel " << channel.get() << " disconnected."); // no active subscriber found, but we still need to close the channel channelManager->removeChannel(channel); return; } // Clear the topic owner ship channelManager->clearHostForTopic(as->getTopic(), channel->getHostAddress()); // When the channel disconnected, if resubscribe is required, we would just // cleanup the old channel when resubscribe succeed. 
// Otherwise, we would cleanup the old channel then notify with a TOPIC_MOVED event LOG4CXX_INFO(logger, "Tell " << *as << " his channel " << channel.get() << " is disconnected."); if (!as->isResubscribeRequired()) { OperationCallbackPtr closeCb(new CloseSubscriptionCallback(as, TOPIC_MOVED)); TopicSubscriber ts(as->getTopic(), as->getSubscriberId()); channelManager->asyncCloseSubscription(ts, closeCb); } else { as->processEvent(as->getTopic(), as->getSubscriberId(), TOPIC_MOVED); } } void SimpleSubscriberClientChannelHandler::closeHandler() { // just remove the active subscriber ActiveSubscriberPtr as = getActiveSubscriber(); if (as.get()) { as->close(); clearActiveSubscriber(); LOG4CXX_DEBUG(logger, "Closed " << *as << "."); } } // // Subscribe Response Handler // SimpleSubscribeResponseHandler::SimpleSubscribeResponseHandler( const SimpleDuplexChannelManagerPtr& channelManager) : ResponseHandler(boost::dynamic_pointer_cast(channelManager)), sChannelManager(channelManager) { } void SimpleSubscribeResponseHandler::handleSuccessResponse( const PubSubResponsePtr& m, const PubSubDataPtr& txn, const SimpleSubscriberClientChannelHandlerPtr& handler) { // for subscribe request, check whether is any subscription preferences received SubscriptionPreferencesPtr preferences; if (m->has_responsebody()) { const ResponseBody& respBody = m->responsebody(); if (respBody.has_subscriberesponse()) { const SubscribeResponse& resp = respBody.subscriberesponse(); if (resp.has_preferences()) { preferences = SubscriptionPreferencesPtr(new SubscriptionPreferences(resp.preferences())); } } } handler->setActiveSubscriber(txn, preferences); TopicSubscriber ts(txn->getTopic(), txn->getSubscriberId()); if (!sChannelManager->storeSubscriptionChannelHandler(ts, txn, handler)) { // found existed subscription channel handler handler->close(); if (txn->isResubscribeRequest()) { txn->getCallback()->operationFailed(ResubscribeException()); } else { txn->getCallback()->operationFailed(AlreadySubscribedException()); } return; } if (m->has_responsebody()) { txn->getCallback()->operationComplete(m->responsebody()); } else { txn->getCallback()->operationComplete(ResponseBody()); } } void SimpleSubscribeResponseHandler::handleResponse(const PubSubResponsePtr& m, const PubSubDataPtr& txn, const DuplexChannelPtr& channel) { if (!txn.get()) { LOG4CXX_ERROR(logger, "Invalid transaction recevied from channel " << channel.get()); return; } LOG4CXX_DEBUG(logger, "message received with status " << m->statuscode() << " from channel " << channel.get()); SimpleSubscriberClientChannelHandlerPtr handler = boost::dynamic_pointer_cast(channel->getChannelHandler()); if (!handler.get()) { LOG4CXX_ERROR(logger, "No simple subscriber client channel handler found for channel " << channel.get() << "."); // No channel handler, but we still need to close the channel channel->close(); txn->getCallback()->operationFailed(NoChannelHandlerException()); return; } if (SUCCESS != m->statuscode()) { // Subscribe request doesn't succeed, we close the handle and its binding channel handler->close(); } switch (m->statuscode()) { case SUCCESS: handleSuccessResponse(m, txn, handler); break; case SERVICE_DOWN: txn->getCallback()->operationFailed(ServiceDownException()); break; case CLIENT_ALREADY_SUBSCRIBED: case TOPIC_BUSY: txn->getCallback()->operationFailed(AlreadySubscribedException()); break; case CLIENT_NOT_SUBSCRIBED: txn->getCallback()->operationFailed(NotSubscribedException()); break; case NOT_RESPONSIBLE_FOR_TOPIC: redirectRequest(m, txn, channel); break; 
default: LOG4CXX_ERROR(logger, "Unexpected response " << m->statuscode() << " for " << txn->getTxnId()); txn->getCallback()->operationFailed(UnexpectedResponseException()); break; } } // // Simple Duplex Channel Manager // SimpleDuplexChannelManager::SimpleDuplexChannelManager(const Configuration& conf) : DuplexChannelManager(conf) { LOG4CXX_DEBUG(logger, "Created SimpleDuplexChannelManager " << this); } SimpleDuplexChannelManager::~SimpleDuplexChannelManager() { LOG4CXX_DEBUG(logger, "Destroyed SimpleDuplexChannelManager " << this); } void SimpleDuplexChannelManager::start() { // Add subscribe response handler subscriptionHandlers[SUBSCRIBE] = ResponseHandlerPtr(new SimpleSubscribeResponseHandler( boost::dynamic_pointer_cast<SimpleDuplexChannelManager>(shared_from_this()))); DuplexChannelManager::start(); } void SimpleDuplexChannelManager::close() { DuplexChannelManager::close(); subscriptionHandlers.clear(); } SubscriberClientChannelHandlerPtr SimpleDuplexChannelManager::getSubscriptionChannelHandler(const TopicSubscriber& ts) { return boost::dynamic_pointer_cast<SubscriberClientChannelHandler>( getSimpleSubscriptionChannelHandler(ts)); } const SimpleSubscriberClientChannelHandlerPtr& SimpleDuplexChannelManager::getSimpleSubscriptionChannelHandler(const TopicSubscriber& ts) { boost::shared_lock<boost::shared_mutex> lock(topicsubscriber2handler_lock); return topicsubscriber2handler[ts]; } DuplexChannelPtr SimpleDuplexChannelManager::getSubscriptionChannel( const TopicSubscriber& ts, const bool isResubscribeRequest) { SimpleSubscriberClientChannelHandlerPtr handler; // for a resubscribe request, we force a new subscription channel if (!isResubscribeRequest) { handler = getSimpleSubscriptionChannelHandler(ts); } // found a live subscription channel if (handler.get()) { return boost::dynamic_pointer_cast<DuplexChannel>(handler->getChannel()); } const HostAddress& addr = getHostForTopic(ts.first); if (addr.isNullHost()) { return DuplexChannelPtr(); } else { // we already know which hub server owns the topic DuplexChannelPtr ch = getSubscriptionChannel(addr); if (ch.get()) { return ch; } ch = createSubscriptionChannel(addr); return storeSubscriptionChannel(ch, true); } } DuplexChannelPtr SimpleDuplexChannelManager::getSubscriptionChannel(const HostAddress& addr) { // for simple subscription channels, a new channel is established each time, so there is never one to reuse return DuplexChannelPtr(); } DuplexChannelPtr SimpleDuplexChannelManager::createSubscriptionChannel(const HostAddress& addr) { // Create a simple subscriber channel handler SimpleSubscriberClientChannelHandler * subscriberHandler = new SimpleSubscriberClientChannelHandler( boost::dynamic_pointer_cast<SimpleDuplexChannelManager>(shared_from_this()), subscriptionHandlers); ChannelHandlerPtr channelHandler(subscriberHandler); // Create a subscription channel DuplexChannelPtr channel = createChannel(dispatcher->getService(), addr, channelHandler); subscriberHandler->setChannel(boost::dynamic_pointer_cast<AbstractDuplexChannel>(channel)); LOG4CXX_INFO(logger, "New subscription channel " << channel.get() << " is created to host " << addr << ", whose channel handler is " << subscriberHandler); return channel; } DuplexChannelPtr SimpleDuplexChannelManager::storeSubscriptionChannel(const DuplexChannelPtr& ch, bool doConnect) { // for the simple duplex channel manager, we only store the subscription // channel handler once a subscribe succeeds if (doConnect) { ch->connect(); } return ch; } bool SimpleDuplexChannelManager::storeSubscriptionChannelHandler( const TopicSubscriber& ts, const PubSubDataPtr& txn, const SimpleSubscriberClientChannelHandlerPtr& handler) { SimpleSubscriberClientChannelHandlerPtr other; bool success
= false; bool isResubscribeRequest = txn->isResubscribeRequest(); { boost::lock_guard<boost::shared_mutex> lock(topicsubscriber2handler_lock); other = topicsubscriber2handler[ts]; if (other.get()) { if (isResubscribeRequest) { DuplexChannelPtr& origChannel = txn->getOrigChannelForResubscribe(); const AbstractDuplexChannelPtr& otherChannel = other->getChannel(); if (origChannel.get() != otherChannel.get()) { // the channel has changed for this subscriber, which means the client // called closesub and subscribed again while the disconnected channel // was resubscribing, so we should not let the resubscribe succeed success = false; } else { topicsubscriber2handler[ts] = handler; success = true; } } else { success = false; } } else { if (isResubscribeRequest) { // it is a resubscribe request but no handler was found, which means a // closesub has been called while resubscribing, so we should not let // the resubscribe succeed success = false; } else { topicsubscriber2handler[ts] = handler; success = true; } } } if (isResubscribeRequest && success && other.get()) { // the old handler was evicted because the resubscribe succeeded, // so now is the time to close the old disconnected channel other->close(); } return success; } void SimpleDuplexChannelManager::asyncCloseSubscription(const TopicSubscriber& ts, const OperationCallbackPtr& callback) { SimpleSubscriberClientChannelHandlerPtr handler; { boost::lock_guard<boost::shared_mutex> lock(topicsubscriber2handler_lock); handler = topicsubscriber2handler[ts]; topicsubscriber2handler.erase(ts); LOG4CXX_DEBUG(logger, "CloseSubscription:: remove subscriber channel handler for (topic:" << ts.first << ", subscriber:" << ts.second << ")."); } if (handler.get() != 0) { handler->close(); } callback->operationComplete(); } void SimpleDuplexChannelManager::handoverDelivery(const TopicSubscriber& ts, const MessageHandlerCallbackPtr& msgHandler, const ClientMessageFilterPtr& filter) { SimpleSubscriberClientChannelHandlerPtr handler; { boost::shared_lock<boost::shared_mutex> lock(topicsubscriber2handler_lock); handler = topicsubscriber2handler[ts]; } if (!handler.get()) { LOG4CXX_WARN(logger, "No channel handler found for (topic:" << ts.first << ", subscriber:" << ts.second << ") to handover delivery with handler " << msgHandler.get() << ", filter " << filter.get() << "."); return; } try { handler->startDelivery(ts, msgHandler, filter); } catch(const AlreadyStartDeliveryException& ase) { LOG4CXX_WARN(logger, "Someone else has already started delivery for (topic:" << ts.first << ", subscriber:" << ts.second << ") with a brand new message handler, " << "so it is OK to give up handing over the old message handler."); } catch(const std::exception& e) { LOG4CXX_WARN(logger, "Error when handing over old message handler for (topic:" << ts.first << ", subscriber:" << ts.second << ") : " << e.what()); } } bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/lib/simplesubscriberimpl.h000066400000000000000000000165071244507361200275000ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License.
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #ifndef SIMPLE_SUBSCRIBE_IMPL_H #define SIMPLE_SUBSCRIBE_IMPL_H #include #include "subscriberimpl.h" #include "clientimpl.h" namespace Hedwig { class SimpleActiveSubscriber : public ActiveSubscriber { public: SimpleActiveSubscriber(const PubSubDataPtr& data, const AbstractDuplexChannelPtr& channel, const SubscriptionPreferencesPtr& preferences, const DuplexChannelManagerPtr& channelManager); protected: virtual void doStartDelivery(const MessageHandlerCallbackPtr& handler, const ClientMessageFilterPtr& filter); // Stop Delivery virtual void doStopDelivery(); // Queue message when message handler is not ready virtual void queueMessage(const PubSubResponsePtr& m); private: std::size_t maxQueueLen; }; class CloseSubscriptionCallback : public OperationCallback { public: explicit CloseSubscriptionCallback(const ActiveSubscriberPtr& activeSubscriber, const SubscriptionEvent event); virtual void operationComplete(); virtual void operationFailed(const std::exception& exception); private: void finish(); const ActiveSubscriberPtr activeSubscriber; const SubscriptionEvent event; }; // Simple Subscription Channel Handler : One subscription per channel class SimpleSubscriberClientChannelHandler : public SubscriberClientChannelHandler { public: SimpleSubscriberClientChannelHandler(const DuplexChannelManagerPtr& channelManager, ResponseHandlerMap& handlers); virtual ~SimpleSubscriberClientChannelHandler() {} // Set the subscriber serving on this channel bool setActiveSubscriber(const PubSubDataPtr& op, const SubscriptionPreferencesPtr& preferences); virtual void handleSubscriptionEvent(const TopicSubscriber& ts, const SubscriptionEvent event); // Deliver a received message to given message handler virtual void deliverMessage(const TopicSubscriber& ts, const PubSubResponsePtr& m); // Start Delivery for a given topic subscriber virtual void startDelivery(const TopicSubscriber& ts, const MessageHandlerCallbackPtr& handler, const ClientMessageFilterPtr& filter); // Stop Delivery for a given topic subscriber virtual void stopDelivery(const TopicSubscriber& ts); // Has Subscription on the Channel virtual bool hasSubscription(const TopicSubscriber& ts); // Close Subscription for a given topic subscriber virtual void asyncCloseSubscription(const TopicSubscriber& ts, const OperationCallbackPtr& callback); // Consume message for a given topic subscriber virtual void consume(const TopicSubscriber& ts, const MessageSeqId& messageSeqId); protected: // Subscription channel disconnected: reconnect the subscription channel virtual void onChannelDisconnected(const DuplexChannelPtr& channel); virtual void closeHandler(); private: inline void clearActiveSubscriber() { boost::lock_guard<boost::shared_mutex> lock(subscriber_lock); subscriber = ActiveSubscriberPtr(); } inline const ActiveSubscriberPtr& getActiveSubscriber() { boost::shared_lock<boost::shared_mutex> lock(subscriber_lock); return subscriber; } ActiveSubscriberPtr subscriber; boost::shared_mutex subscriber_lock; }; typedef boost::shared_ptr<SimpleSubscriberClientChannelHandler> SimpleSubscriberClientChannelHandlerPtr; // // Simple Duplex Channel Manager // class SimpleDuplexChannelManager : public DuplexChannelManager { public: explicit
SimpleDuplexChannelManager(const Configuration& conf); virtual ~SimpleDuplexChannelManager(); bool storeSubscriptionChannelHandler(const TopicSubscriber& ts, const PubSubDataPtr& txn, const SimpleSubscriberClientChannelHandlerPtr& handler); // Get the subscription channel handler for a given subscription virtual SubscriberClientChannelHandlerPtr getSubscriptionChannelHandler(const TopicSubscriber& ts); // Close subscription for a given subscription virtual void asyncCloseSubscription(const TopicSubscriber& ts, const OperationCallbackPtr& callback); virtual void handoverDelivery(const TopicSubscriber& ts, const MessageHandlerCallbackPtr& handler, const ClientMessageFilterPtr& filter); // start the channel manager virtual void start(); // close the channel manager virtual void close(); protected: virtual DuplexChannelPtr getSubscriptionChannel(const TopicSubscriber& ts, const bool isResubscribeRequest); virtual DuplexChannelPtr getSubscriptionChannel(const HostAddress& addr); virtual DuplexChannelPtr createSubscriptionChannel(const HostAddress& addr); virtual DuplexChannelPtr storeSubscriptionChannel(const DuplexChannelPtr& ch, bool doConnect); private: const SimpleSubscriberClientChannelHandlerPtr& getSimpleSubscriptionChannelHandler(const TopicSubscriber& ts); std::tr1::unordered_map<TopicSubscriber, SimpleSubscriberClientChannelHandlerPtr, TopicSubscriberHash> topicsubscriber2handler; boost::shared_mutex topicsubscriber2handler_lock; // Response Handlers for subscription requests ResponseHandlerMap subscriptionHandlers; }; typedef boost::shared_ptr<SimpleDuplexChannelManager> SimpleDuplexChannelManagerPtr; // Subscribe Response Handler class SimpleSubscribeResponseHandler : public ResponseHandler { public: explicit SimpleSubscribeResponseHandler( const SimpleDuplexChannelManagerPtr& channelManager); virtual ~SimpleSubscribeResponseHandler() {} virtual void handleResponse(const PubSubResponsePtr& m, const PubSubDataPtr& txn, const DuplexChannelPtr& channel); private: void handleSuccessResponse(const PubSubResponsePtr& m, const PubSubDataPtr& txn, const SimpleSubscriberClientChannelHandlerPtr& handler); const SimpleDuplexChannelManagerPtr sChannelManager; }; } /* Namespace Hedwig */ #endif bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/lib/subscriberimpl.cpp000066400000000000000000000650111244507361200266130ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License.
*/ #ifdef HAVE_CONFIG_H #include #endif #include #include #include #include #include "subscriberimpl.h" #include "util.h" #include "channel.h" #include "filterablemessagehandler.h" static log4cxx::LoggerPtr logger(log4cxx::Logger::getLogger("hedwig."__FILE__)); using namespace Hedwig; const int DEFAULT_MESSAGE_CONSUME_RETRY_WAIT_TIME = 5000; const int DEFAULT_SUBSCRIBER_CONSUME_RETRY_WAIT_TIME = 5000; const int DEFAULT_RECONNECT_SUBSCRIBE_RETRY_WAIT_TIME = 5000; const bool DEFAULT_SUBSCRIBER_AUTOCONSUME = true; const int DEFAULT_SUBSCRIPTION_MESSAGE_BOUND = 0; // typeid of the exception type itself; typeid(ResubscribeException()) would name a function type static const std::type_info& RESUBSCRIBE_EXCEPTION_TYPEID = typeid(ResubscribeException); ConsumeWriteCallback::ConsumeWriteCallback(const ActiveSubscriberPtr& activeSubscriber, const PubSubDataPtr& data, int retrywait) : activeSubscriber(activeSubscriber), data(data), retrywait(retrywait) { } ConsumeWriteCallback::~ConsumeWriteCallback() { } /* static */ void ConsumeWriteCallback::timerComplete( const ActiveSubscriberPtr& activeSubscriber, const PubSubDataPtr& data, const boost::system::error_code& error) { if (error) { // shutting down return; } activeSubscriber->consume(data->getMessageSeqId()); } void ConsumeWriteCallback::operationComplete() { LOG4CXX_DEBUG(logger, "Successfully wrote consume transaction: " << data->getTxnId()); } void ConsumeWriteCallback::operationFailed(const std::exception& exception) { LOG4CXX_ERROR(logger, "Error writing consume request (topic:" << data->getTopic() << ", subscriber:" << data->getSubscriberId() << ", txn:" << data->getTxnId() << ") : " << exception.what() << ", will be retried in " << retrywait << " milliseconds"); boost::asio::deadline_timer t(activeSubscriber->getChannel()->getService(), boost::posix_time::milliseconds(retrywait)); // schedule the retry: timerComplete re-issues the consume request after the wait t.async_wait(boost::bind(&ConsumeWriteCallback::timerComplete, activeSubscriber, data, boost::asio::placeholders::error)); } SubscriberConsumeCallback::SubscriberConsumeCallback(const DuplexChannelManagerPtr& channelManager, const ActiveSubscriberPtr& activeSubscriber, const PubSubResponsePtr& m) : channelManager(channelManager), activeSubscriber(activeSubscriber), m(m) { } void SubscriberConsumeCallback::operationComplete() { LOG4CXX_DEBUG(logger, "ConsumeCallback::operationComplete " << *activeSubscriber); if (channelManager->getConfiguration().getBool(Configuration::SUBSCRIBER_AUTOCONSUME, DEFAULT_SUBSCRIBER_AUTOCONSUME)) { activeSubscriber->consume(m->message().msgid()); } } /* static */ void SubscriberConsumeCallback::timerComplete( const ActiveSubscriberPtr activeSubscriber, const PubSubResponsePtr m, const boost::system::error_code& error) { if (error) { return; } activeSubscriber->deliverMessage(m); } void SubscriberConsumeCallback::operationFailed(const std::exception& exception) { LOG4CXX_ERROR(logger, "ConsumeCallback::operationFailed " << *activeSubscriber); int retrywait = channelManager->getConfiguration() .getInt(Configuration::SUBSCRIBER_CONSUME_RETRY_WAIT_TIME, DEFAULT_SUBSCRIBER_CONSUME_RETRY_WAIT_TIME); LOG4CXX_ERROR(logger, "Error passing message to client for " << *activeSubscriber << " error: " << exception.what() << " retrying in " << retrywait << " milliseconds"); // We leverage the same io service to retry delivering messages.
AbstractDuplexChannelPtr ch = activeSubscriber->getChannel(); boost::asio::deadline_timer t(ch->getService(), boost::posix_time::milliseconds(retrywait)); t.async_wait(boost::bind(&SubscriberConsumeCallback::timerComplete, activeSubscriber, m, boost::asio::placeholders::error)); } CloseSubscriptionForUnsubscribeCallback::CloseSubscriptionForUnsubscribeCallback( const DuplexChannelManagerPtr& channelManager, const std::string& topic, const std::string& subscriberId, const OperationCallbackPtr& unsubCb) : channelManager(channelManager), topic(topic), subscriberId(subscriberId), unsubCb(unsubCb) { } void CloseSubscriptionForUnsubscribeCallback::operationComplete() { ResponseCallbackPtr respCallback(new ResponseCallbackAdaptor(unsubCb)); PubSubDataPtr data = PubSubData::forUnsubscribeRequest(channelManager->nextTxnId(), subscriberId, topic, respCallback); // submit the unsubscribe request channelManager->submitOp(data); } void CloseSubscriptionForUnsubscribeCallback::operationFailed(const std::exception& exception) { unsubCb->operationFailed(exception); } ResubscribeCallback::ResubscribeCallback(const ActiveSubscriberPtr& activeSubscriber) : activeSubscriber(activeSubscriber) { } void ResubscribeCallback::operationComplete(const ResponseBody & resp) { // hand over delivery to the resubscribed subscriber. activeSubscriber->handoverDelivery(); } void ResubscribeCallback::operationFailed(const std::exception& exception) { if (RESUBSCRIBE_EXCEPTION_TYPEID == typeid(exception)) { // it was caused by a closesub while resubscribing, // so we don't need to retry the resubscribe again LOG4CXX_WARN(logger, "Failed to resubscribe " << *activeSubscriber << " : it was caused by a closesub while resubscribing, " << "so we don't need to retry the subscribe again."); return; } LOG4CXX_ERROR(logger, "Failed to resubscribe " << *activeSubscriber << ", will retry later : " << exception.what()); activeSubscriber->resubscribe(); } ActiveSubscriber::ActiveSubscriber(const PubSubDataPtr& data, const AbstractDuplexChannelPtr& channel, const SubscriptionPreferencesPtr& preferences, const DuplexChannelManagerPtr& channelManager) : channel(channel), deliverystate(STOPPED_DELIVERY), origData(data), preferences(preferences), channelManager(channelManager), should_wait(false) { LOG4CXX_DEBUG(logger, "Creating ActiveSubscriber " << this << " for (topic:" << data->getTopic() << ", subscriber:" << data->getSubscriberId() << ")."); } const std::string& ActiveSubscriber::getTopic() const { return origData->getTopic(); } const std::string& ActiveSubscriber::getSubscriberId() const { return origData->getSubscriberId(); } void ActiveSubscriber::deliverMessage(const PubSubResponsePtr& m) { boost::lock_guard<boost::shared_mutex> lock(queue_lock); LOG4CXX_INFO(logger, "Message received (topic:" << origData->getTopic() << ", subscriberId:" << origData->getSubscriberId() << ", msgId:" << m->message().msgid().localcomponent() << ") from channel " << channel.get()); if (this->handler.get()) { OperationCallbackPtr callback(new SubscriberConsumeCallback(channelManager, shared_from_this(), m)); this->handler->consume(origData->getTopic(), origData->getSubscriberId(), m->message(), callback); } else { queueMessage(m); } } void ActiveSubscriber::queueMessage(const PubSubResponsePtr& m) { queue.push_back(m); } void ActiveSubscriber::startDelivery(const MessageHandlerCallbackPtr& origHandler, const ClientMessageFilterPtr& origFilter) { // check the delivery state to avoid deadlock when calling startdelivery/stopdelivery // in a message handler.
// STOPPED_DELIVERY => STARTED_DELIVERY (only one could start delivery) { boost::lock_guard<boost::shared_mutex> lock(deliverystate_lock); if (STARTED_DELIVERY == deliverystate) { LOG4CXX_ERROR(logger, *this << " has started delivery with message handler " << this->handler.get()); throw AlreadyStartDeliveryException(); } else if (STARTING_DELIVERY == deliverystate) { LOG4CXX_ERROR(logger, *this << " : delivery is being started by someone else now."); throw StartingDeliveryException(); } deliverystate = STARTING_DELIVERY; } try { doStartDelivery(origHandler, origFilter); // STARTING_DELIVERY => STARTED_DELIVERY setDeliveryState(STARTED_DELIVERY); } catch (const std::exception&) { // STARTING_DELIVERY => STOPPED_DELIVERY setDeliveryState(STOPPED_DELIVERY); throw; } } void ActiveSubscriber::doStartDelivery(const MessageHandlerCallbackPtr& origHandler, const ClientMessageFilterPtr& origFilter) { MessageHandlerCallbackPtr handler; // origHandler & origFilter have already passed validation. If origFilter is null, // we start delivery w/o message filtering. If the preferences are null, which // means we connected to an old version hub server, we also start w/o message filtering if (origFilter.get() && preferences.get()) { origFilter->setSubscriptionPreferences(origData->getTopic(), origData->getSubscriberId(), preferences); handler = MessageHandlerCallbackPtr(new FilterableMessageHandler(origHandler, origFilter)); } else { handler = origHandler; } { boost::lock_guard<boost::shared_mutex> lock(queue_lock); if (this->handler.get()) { LOG4CXX_ERROR(logger, *this << " has started delivery with message handler " << this->handler.get()); throw AlreadyStartDeliveryException(); } if (!handler.get()) { // no message handler callback LOG4CXX_WARN(logger, *this << " tried to start delivery with an empty message handler"); return; } this->handler = handler; // store the original filter and handler this->origHandler = origHandler; this->origFilter = origFilter; while (!queue.empty()) { PubSubResponsePtr m = queue.front(); queue.pop_front(); OperationCallbackPtr callback(new SubscriberConsumeCallback(channelManager, shared_from_this(), m)); this->handler->consume(origData->getTopic(), origData->getSubscriberId(), m->message(), callback); } } LOG4CXX_INFO(logger, *this << " #startDelivery to receive messages from channel " << channel.get()); } void ActiveSubscriber::stopDelivery() { // if someone is starting delivery, we should not allow it to stop; // otherwise we would break the ordering guarantee, since queued messages // would be delivered to the message handler in #startDelivery. { boost::lock_guard<boost::shared_mutex> lock(deliverystate_lock); if (STARTING_DELIVERY == deliverystate) { LOG4CXX_ERROR(logger, "someone is starting delivery for " << *this << ". we could not stop delivery now."); throw StartingDeliveryException(); } } LOG4CXX_INFO(logger, *this << " #stopDelivery to stop receiving messages from channel " << channel.get()); // actually stop delivery doStopDelivery(); boost::lock_guard<boost::shared_mutex> lock(queue_lock); this->handler = MessageHandlerCallbackPtr(); // mark the state as stopped setDeliveryState(STOPPED_DELIVERY); } void ActiveSubscriber::doStopDelivery() { // do nothing.
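// Subclasses customize delivery through the doStartDelivery/doStopDelivery/
// queueMessage hooks (see SimpleActiveSubscriber in simplesubscriberimpl.h,
// which bounds its queue with maxQueueLen). As a minimal illustrative sketch
// only -- BoundedActiveSubscriber and maxQueueLen are hypothetical here, not
// defined in this file -- a bounded subscriber could drop the oldest queued
// message once the queue is full:
//
//   void BoundedActiveSubscriber::queueMessage(const PubSubResponsePtr& m) {
//     if (queue.size() >= maxQueueLen) {
//       queue.pop_front(); // drop the oldest message to respect the bound
//     }
//     queue.push_back(m);
//   }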
} void ActiveSubscriber::consume(const MessageSeqId& messageSeqId) { PubSubDataPtr data = PubSubData::forConsumeRequest(channelManager->nextTxnId(), origData->getSubscriberId(), origData->getTopic(), messageSeqId); int retrywait = channelManager->getConfiguration() .getInt(Configuration::MESSAGE_CONSUME_RETRY_WAIT_TIME, DEFAULT_MESSAGE_CONSUME_RETRY_WAIT_TIME); OperationCallbackPtr writecb(new ConsumeWriteCallback(shared_from_this(), data, retrywait)); channel->writeRequest(data->getRequest(), writecb); } void ActiveSubscriber::handoverDelivery() { if (handler.get()) { TopicSubscriber ts(origData->getTopic(), origData->getSubscriberId()); // hand over the message handler to the other active subscriber channelManager->handoverDelivery(ts, origHandler, origFilter); } } void ActiveSubscriber::processEvent(const std::string &topic, const std::string &subscriberId, const SubscriptionEvent event) { if (!isResubscribeRequired()) { channelManager->getEventEmitter().emitSubscriptionEvent(topic, subscriberId, event); return; } // resubmit the subscribe request switch (event) { case TOPIC_MOVED: case SUBSCRIPTION_FORCED_CLOSED: resubscribe(); break; default: LOG4CXX_ERROR(logger, "Received unknown subscription event " << event << " for (topic:" << topic << ", subscriber:" << subscriberId << ")."); break; } } void ActiveSubscriber::resubscribe() { if (should_wait) { waitToResubscribe(); return; } should_wait = true; origData->clearTriedServers(); origData->setCallback(ResponseCallbackPtr(new ResubscribeCallback(shared_from_this()))); DuplexChannelPtr origChannel = boost::dynamic_pointer_cast<DuplexChannel>(channel); origData->setOrigChannelForResubscribe(origChannel); // submit the subscribe request again channelManager->submitOp(origData); } void ActiveSubscriber::waitToResubscribe() { int retrywait = channelManager->getConfiguration().getInt(Configuration::RECONNECT_SUBSCRIBE_RETRY_WAIT_TIME, DEFAULT_RECONNECT_SUBSCRIBE_RETRY_WAIT_TIME); retryTimer = RetryTimerPtr(new boost::asio::deadline_timer(channel->getService(), boost::posix_time::milliseconds(retrywait))); retryTimer->async_wait(boost::bind(&ActiveSubscriber::retryTimerComplete, shared_from_this(), boost::asio::placeholders::error)); } void ActiveSubscriber::retryTimerComplete(const boost::system::error_code& error) { if (error) { return; } should_wait = false; // resubscribe again resubscribe(); } void ActiveSubscriber::close() { // cancel the reconnect timer RetryTimerPtr timer = retryTimer; if (timer.get()) { boost::system::error_code ec; timer->cancel(ec); if (ec) { LOG4CXX_WARN(logger, *this << " cancel resubscribe task " << timer.get() << " error :" << ec.message().c_str()); } } } SubscriberClientChannelHandler::SubscriberClientChannelHandler( const DuplexChannelManagerPtr& channelManager, ResponseHandlerMap& handlers) : HedwigClientChannelHandler(channelManager, handlers) { LOG4CXX_DEBUG(logger, "Creating SubscriberClientChannelHandler " << this); } SubscriberClientChannelHandler::~SubscriberClientChannelHandler() { LOG4CXX_DEBUG(logger, "Cleaning up SubscriberClientChannelHandler " << this); } void SubscriberClientChannelHandler::messageReceived(const DuplexChannelPtr& channel, const PubSubResponsePtr& m) { if (m->has_message()) { TopicSubscriber ts(m->topic(), m->subscriberid()); // dispatch the message to the target topic subscriber.
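// (Dispatch order in messageReceived: data messages are routed to the topic
// subscriber first, then subscription events carried in a response body are
// handled, and anything else falls through to
// HedwigClientChannelHandler::messageReceived to be matched against an
// outstanding transaction.)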
deliverMessage(ts, m); return; } if (m->has_responsebody()) { const ResponseBody& respBody = m->responsebody(); if (respBody.has_subscriptionevent()) { const SubscriptionEventResponse& eventResp = respBody.subscriptionevent(); // dispatch the event TopicSubscriber ts(m->topic(), m->subscriberid()); handleSubscriptionEvent(ts, eventResp.event()); return; } } HedwigClientChannelHandler::messageReceived(channel, m); } void SubscriberClientChannelHandler::doClose() { // clean the handler status closeHandler(); if (channel.get()) { // need to ensure the channel is removed from allchannels list // since it will be killed channelManager->removeChannel(channel); LOG4CXX_INFO(logger, "remove subscription channel " << channel.get() << "."); } } SubscriberImpl::SubscriberImpl(const DuplexChannelManagerPtr& channelManager) : channelManager(channelManager) { } SubscriberImpl::~SubscriberImpl() { LOG4CXX_DEBUG(logger, "deleting subscriber" << this); } void SubscriberImpl::subscribe(const std::string& topic, const std::string& subscriberId, const SubscribeRequest::CreateOrAttach mode) { SubscriptionOptions options; options.set_createorattach(mode); subscribe(topic, subscriberId, options); } void SubscriberImpl::subscribe(const std::string& topic, const std::string& subscriberId, const SubscriptionOptions& options) { SyncOperationCallback* cb = new SyncOperationCallback( channelManager->getConfiguration().getInt(Configuration::SYNC_REQUEST_TIMEOUT, DEFAULT_SYNC_REQUEST_TIMEOUT)); OperationCallbackPtr callback(cb); asyncSubscribe(topic, subscriberId, options, callback); cb->wait(); cb->throwExceptionIfNeeded(); } void SubscriberImpl::asyncSubscribe(const std::string& topic, const std::string& subscriberId, const SubscribeRequest::CreateOrAttach mode, const OperationCallbackPtr& callback) { SubscriptionOptions options; options.set_createorattach(mode); asyncSubscribe(topic, subscriberId, options, callback); } void SubscriberImpl::asyncSubscribe(const std::string& topic, const std::string& subscriberId, const SubscriptionOptions& options, const OperationCallbackPtr& callback) { SubscriptionOptions options2 = options; if (!options2.has_messagebound()) { int messageBound = channelManager->getConfiguration() .getInt(Configuration::SUBSCRIPTION_MESSAGE_BOUND, DEFAULT_SUBSCRIPTION_MESSAGE_BOUND); options2.set_messagebound(messageBound); } ResponseCallbackPtr respCallback(new ResponseCallbackAdaptor(callback)); PubSubDataPtr data = PubSubData::forSubscribeRequest(channelManager->nextTxnId(), subscriberId, topic, respCallback, options2); channelManager->submitOp(data); } void SubscriberImpl::unsubscribe(const std::string& topic, const std::string& subscriberId) { SyncOperationCallback* cb = new SyncOperationCallback( channelManager->getConfiguration().getInt(Configuration::SYNC_REQUEST_TIMEOUT, DEFAULT_SYNC_REQUEST_TIMEOUT)); OperationCallbackPtr callback(cb); asyncUnsubscribe(topic, subscriberId, callback); cb->wait(); cb->throwExceptionIfNeeded(); } void SubscriberImpl::asyncUnsubscribe(const std::string& topic, const std::string& subscriberId, const OperationCallbackPtr& callback) { OperationCallbackPtr closeCb(new CloseSubscriptionForUnsubscribeCallback(channelManager, topic, subscriberId, callback)); asyncCloseSubscription(topic, subscriberId, closeCb); } void SubscriberImpl::consume(const std::string& topic, const std::string& subscriberId, const MessageSeqId& messageSeqId) { TopicSubscriber t(topic, subscriberId); // Get the subscriber channel handler SubscriberClientChannelHandlerPtr handler = 
channelManager->getSubscriptionChannelHandler(t); if (handler.get() == 0) { LOG4CXX_ERROR(logger, "Cannot consume. No subscription channel handler found for topic (" << topic << ") subscriberId(" << subscriberId << ")."); return; } handler->consume(t, messageSeqId); } void SubscriberImpl::startDeliveryWithFilter(const std::string& topic, const std::string& subscriberId, const MessageHandlerCallbackPtr& callback, const ClientMessageFilterPtr& filter) { if (0 == filter.get()) { throw NullMessageFilterException(); } if (0 == callback.get()) { throw NullMessageHandlerException(); } TopicSubscriber t(topic, subscriberId); // Get the subscriber channel handler SubscriberClientChannelHandlerPtr handler = channelManager->getSubscriptionChannelHandler(t); if (handler.get() == 0) { LOG4CXX_ERROR(logger, "Trying to start delivery on a non-existent handler topic = " << topic << ", subscriber = " << subscriberId); throw NotSubscribedException(); } handler->startDelivery(t, callback, filter); } void SubscriberImpl::startDelivery(const std::string& topic, const std::string& subscriberId, const MessageHandlerCallbackPtr& callback) { TopicSubscriber t(topic, subscriberId); // Get the subscriber channel handler SubscriberClientChannelHandlerPtr handler = channelManager->getSubscriptionChannelHandler(t); if (handler.get() == 0) { LOG4CXX_ERROR(logger, "Trying to start delivery on a non-existent handler topic = " << topic << ", subscriber = " << subscriberId); throw NotSubscribedException(); } handler->startDelivery(t, callback, ClientMessageFilterPtr()); } void SubscriberImpl::stopDelivery(const std::string& topic, const std::string& subscriberId) { TopicSubscriber t(topic, subscriberId); // Get the subscriber channel handler SubscriberClientChannelHandlerPtr handler = channelManager->getSubscriptionChannelHandler(t); if (handler.get() == 0) { LOG4CXX_ERROR(logger, "Trying to stop delivery on a non-existent handler topic = " << topic << ", subscriber = " << subscriberId); throw NotSubscribedException(); } handler->stopDelivery(t); } bool SubscriberImpl::hasSubscription(const std::string& topic, const std::string& subscriberId) { TopicSubscriber ts(topic, subscriberId); // Get the subscriber channel handler SubscriberClientChannelHandlerPtr handler = channelManager->getSubscriptionChannelHandler(ts); if (!handler.get()) { return false; } return handler->hasSubscription(ts); } void SubscriberImpl::closeSubscription(const std::string& topic, const std::string& subscriberId) { SyncOperationCallback* cb = new SyncOperationCallback( channelManager->getConfiguration().getInt(Configuration::SYNC_REQUEST_TIMEOUT, DEFAULT_SYNC_REQUEST_TIMEOUT)); OperationCallbackPtr callback(cb); asyncCloseSubscription(topic, subscriberId, callback); cb->wait(); cb->throwExceptionIfNeeded(); } void SubscriberImpl::asyncCloseSubscription(const std::string& topic, const std::string& subscriberId, const OperationCallbackPtr& callback) { LOG4CXX_INFO(logger, "closeSubscription (" << topic << ", " << subscriberId << ")"); TopicSubscriber t(topic, subscriberId); channelManager->asyncCloseSubscription(t, callback); } void SubscriberImpl::addSubscriptionListener(SubscriptionListenerPtr& listener) { channelManager->getEventEmitter().addSubscriptionListener(listener); } void SubscriberImpl::removeSubscriptionListener(SubscriptionListenerPtr& listener) { channelManager->getEventEmitter().removeSubscriptionListener(listener); } // // Unsubscribe Response Handler // UnsubscribeResponseHandler::UnsubscribeResponseHandler(const DuplexChannelManagerPtr&
channelManager) : ResponseHandler(channelManager) {} void UnsubscribeResponseHandler::handleResponse(const PubSubResponsePtr& m, const PubSubDataPtr& txn, const DuplexChannelPtr& channel) { switch (m->statuscode()) { case SUCCESS: if (m->has_responsebody()) { txn->getCallback()->operationComplete(m->responsebody()); } else { txn->getCallback()->operationComplete(ResponseBody()); } break; case SERVICE_DOWN: LOG4CXX_ERROR(logger, "Server responded with SERVICE_DOWN for " << txn->getTxnId()); txn->getCallback()->operationFailed(ServiceDownException()); break; case CLIENT_ALREADY_SUBSCRIBED: case TOPIC_BUSY: txn->getCallback()->operationFailed(AlreadySubscribedException()); break; case CLIENT_NOT_SUBSCRIBED: txn->getCallback()->operationFailed(NotSubscribedException()); break; case NOT_RESPONSIBLE_FOR_TOPIC: redirectRequest(m, txn, channel); break; default: LOG4CXX_ERROR(logger, "Unexpected response " << m->statuscode() << " for " << txn->getTxnId()); txn->getCallback()->operationFailed(UnexpectedResponseException()); break; } } // // CloseSubscription Response Handler // CloseSubscriptionResponseHandler::CloseSubscriptionResponseHandler( const DuplexChannelManagerPtr& channelManager) : ResponseHandler(channelManager) {} void CloseSubscriptionResponseHandler::handleResponse( const PubSubResponsePtr& m, const PubSubDataPtr& txn, const DuplexChannelPtr& channel) { switch (m->statuscode()) { case SUCCESS: if (m->has_responsebody()) { txn->getCallback()->operationComplete(m->responsebody()); } else { txn->getCallback()->operationComplete(ResponseBody()); } break; case SERVICE_DOWN: LOG4CXX_ERROR(logger, "Server responded with SERVICE_DOWN for " << txn->getTxnId()); txn->getCallback()->operationFailed(ServiceDownException()); break; case CLIENT_ALREADY_SUBSCRIBED: case TOPIC_BUSY: txn->getCallback()->operationFailed(AlreadySubscribedException()); break; case CLIENT_NOT_SUBSCRIBED: txn->getCallback()->operationFailed(NotSubscribedException()); break; case NOT_RESPONSIBLE_FOR_TOPIC: redirectRequest(m, txn, channel); break; default: LOG4CXX_ERROR(logger, "Unexpected response " << m->statuscode() << " for " << txn->getTxnId()); txn->getCallback()->operationFailed(UnexpectedResponseException()); break; } } std::ostream& Hedwig::operator<<(std::ostream& os, const ActiveSubscriber& subscriber) { os << "ActiveSubscriber(" << &subscriber << ", topic:" << subscriber.getTopic() << ", subscriber:" << subscriber.getSubscriberId() << ")"; return os; } bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/lib/subscriberimpl.h000066400000000000000000000303231244507361200262560ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License.
*/ #ifndef SUBSCRIBE_IMPL_H #define SUBSCRIBE_IMPL_H #include #include #include "clientimpl.h" #include #ifdef USE_BOOST_TR1 #include #else #include #endif #include #include #include #include #include namespace Hedwig { class ActiveSubscriber; typedef boost::shared_ptr<ActiveSubscriber> ActiveSubscriberPtr; class ConsumeWriteCallback : public OperationCallback { public: ConsumeWriteCallback(const ActiveSubscriberPtr& activeSubscriber, const PubSubDataPtr& data, int retrywait); virtual ~ConsumeWriteCallback(); void operationComplete(); void operationFailed(const std::exception& exception); static void timerComplete(const ActiveSubscriberPtr& activeSubscriber, const PubSubDataPtr& data, const boost::system::error_code& error); private: const ActiveSubscriberPtr activeSubscriber; const PubSubDataPtr data; int retrywait; }; class SubscriberClientChannelHandler; typedef boost::shared_ptr<SubscriberClientChannelHandler> SubscriberClientChannelHandlerPtr; class SubscriberConsumeCallback : public OperationCallback { public: SubscriberConsumeCallback(const DuplexChannelManagerPtr& channelManager, const ActiveSubscriberPtr& activeSubscriber, const PubSubResponsePtr& m); void operationComplete(); void operationFailed(const std::exception& exception); static void timerComplete(const ActiveSubscriberPtr activeSubscriber, const PubSubResponsePtr m, const boost::system::error_code& error); private: const DuplexChannelManagerPtr channelManager; const ActiveSubscriberPtr activeSubscriber; const PubSubResponsePtr m; }; class CloseSubscriptionForUnsubscribeCallback : public OperationCallback { public: CloseSubscriptionForUnsubscribeCallback(const DuplexChannelManagerPtr& channelManager, const std::string& topic, const std::string& subscriberId, const OperationCallbackPtr& unsubCb); virtual void operationComplete(); virtual void operationFailed(const std::exception& exception); private: const DuplexChannelManagerPtr channelManager; const std::string topic; const std::string subscriberId; const OperationCallbackPtr unsubCb; }; // An instance handles all actions belonging to a subscription class ActiveSubscriber : public boost::enable_shared_from_this<ActiveSubscriber> { public: ActiveSubscriber(const PubSubDataPtr& data, const AbstractDuplexChannelPtr& channel, const SubscriptionPreferencesPtr& preferences, const DuplexChannelManagerPtr& channelManager); virtual ~ActiveSubscriber() {} // Get the topic const std::string& getTopic() const; // Get the subscriber id const std::string& getSubscriberId() const; inline MessageHandlerCallbackPtr getMessageHandler() const { return handler; } inline const AbstractDuplexChannelPtr& getChannel() const { return channel; } // Deliver a received message void deliverMessage(const PubSubResponsePtr& m); // // Start Delivery. If filter is null, just start delivery w/o filter // otherwise start delivery with the given filter.
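//
// As an illustrative sketch only (a hypothetical handler, not part of this
// header; it assumes the MessageHandlerCallback::consume signature used by
// the call sites in subscriberimpl.cpp), a handler passed to startDelivery
// could log each message and ack it through the provided callback, which
// triggers auto-consume when SUBSCRIBER_AUTOCONSUME is enabled:
//
//   class PrintingMessageHandler : public MessageHandlerCallback {
//     virtual void consume(const std::string& topic, const std::string& subscriberId,
//                          const Message& msg, OperationCallbackPtr& callback) {
//       std::cout << "(topic:" << topic << ", subscriber:" << subscriberId << ")" << std::endl;
//       callback->operationComplete(); // ack so the message can be consumed
//     }
//   };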
// void startDelivery(const MessageHandlerCallbackPtr& handler, const ClientMessageFilterPtr& filter); // Stop Delivery virtual void stopDelivery(); // Consume message void consume(const MessageSeqId& messageSeqId); // Process Event received from subscription channel void processEvent(const std::string &topic, const std::string &subscriberId, const SubscriptionEvent event); // handover message delivery to other subscriber void handoverDelivery(); // Is resubscribe required inline bool isResubscribeRequired() { return origData->getSubscriptionOptions().enableresubscribe(); } // Resubscribe the subscriber void resubscribe(); // Close the ActiveSubscriber void close(); friend std::ostream& operator<<(std::ostream& os, const ActiveSubscriber& subscriber); protected: // Wait to resubscribe void waitToResubscribe(); void retryTimerComplete(const boost::system::error_code& error); // Start Delivery with a message filter virtual void doStartDelivery(const MessageHandlerCallbackPtr& handler, const ClientMessageFilterPtr& filter); // Stop Delivery virtual void doStopDelivery(); // Queue message when message handler is not ready virtual void queueMessage(const PubSubResponsePtr& m); AbstractDuplexChannelPtr channel; boost::shared_mutex queue_lock; std::deque<PubSubResponsePtr> queue; private: enum DeliveryState { STARTING_DELIVERY, STARTED_DELIVERY, STOPPED_DELIVERY, }; inline void setDeliveryState(DeliveryState state) { { boost::lock_guard<boost::shared_mutex> lock(deliverystate_lock); deliverystate = state; } } boost::shared_mutex deliverystate_lock; DeliveryState deliverystate; // Keep the original handler and filter to hand over when resubscribed MessageHandlerCallbackPtr origHandler; ClientMessageFilterPtr origFilter; MessageHandlerCallbackPtr handler; const PubSubDataPtr origData; const SubscriptionPreferencesPtr preferences; DuplexChannelManagerPtr channelManager; // variables used for resubscribe bool should_wait; typedef boost::shared_ptr<boost::asio::deadline_timer> RetryTimerPtr; RetryTimerPtr retryTimer; }; class ResubscribeCallback : public ResponseCallback { public: explicit ResubscribeCallback(const ActiveSubscriberPtr& activeSubscriber); virtual void operationComplete(const ResponseBody & resp); virtual void operationFailed(const std::exception& exception); private: const ActiveSubscriberPtr activeSubscriber; }; class SubscriberClientChannelHandler : public HedwigClientChannelHandler, public boost::enable_shared_from_this<SubscriberClientChannelHandler> { public: SubscriberClientChannelHandler(const DuplexChannelManagerPtr& channelManager, ResponseHandlerMap& handlers); virtual ~SubscriberClientChannelHandler(); virtual void handleSubscriptionEvent(const TopicSubscriber& ts, const SubscriptionEvent event) = 0; // Deliver a received message to given message handler virtual void deliverMessage(const TopicSubscriber& ts, const PubSubResponsePtr& m) = 0; // // Start Delivery for a given topic subscriber. If the filter is null, // start delivery w/o filtering; otherwise start delivery with the // given message filter.
// virtual void startDelivery(const TopicSubscriber& ts, const MessageHandlerCallbackPtr& handler, const ClientMessageFilterPtr& filter) = 0; // Stop Delivery for a given topic subscriber virtual void stopDelivery(const TopicSubscriber& ts) = 0; // Has Subscription on the Channel virtual bool hasSubscription(const TopicSubscriber& ts) = 0; // Close Subscription for a given topic subscriber virtual void asyncCloseSubscription(const TopicSubscriber& ts, const OperationCallbackPtr& callback) = 0; // Consume message for a given topic subscriber virtual void consume(const TopicSubscriber& ts, const MessageSeqId& messageSeqId) = 0; // Message received from the underlying channel virtual void messageReceived(const DuplexChannelPtr& channel, const PubSubResponsePtr& m); // Bind the underlying channel to the subscription channel handler inline void setChannel(const AbstractDuplexChannelPtr& channel) { this->channel = channel; } // Return the underlying channel inline const AbstractDuplexChannelPtr& getChannel() const { return channel; } protected: // close logic for subscription channel handler virtual void doClose(); // Clean the handler status virtual void closeHandler() = 0; AbstractDuplexChannelPtr channel; }; class SubscriberImpl : public Subscriber { public: SubscriberImpl(const DuplexChannelManagerPtr& channelManager); ~SubscriberImpl(); void subscribe(const std::string& topic, const std::string& subscriberId, const SubscribeRequest::CreateOrAttach mode); void asyncSubscribe(const std::string& topic, const std::string& subscriberId, const SubscribeRequest::CreateOrAttach mode, const OperationCallbackPtr& callback); void subscribe(const std::string& topic, const std::string& subscriberId, const SubscriptionOptions& options); void asyncSubscribe(const std::string& topic, const std::string& subscriberId, const SubscriptionOptions& options, const OperationCallbackPtr& callback); void unsubscribe(const std::string& topic, const std::string& subscriberId); void asyncUnsubscribe(const std::string& topic, const std::string& subscriberId, const OperationCallbackPtr& callback); void consume(const std::string& topic, const std::string& subscriberId, const MessageSeqId& messageSeqId); void startDelivery(const std::string& topic, const std::string& subscriberId, const MessageHandlerCallbackPtr& callback); void startDeliveryWithFilter(const std::string& topic, const std::string& subscriberId, const MessageHandlerCallbackPtr& callback, const ClientMessageFilterPtr& filter); void stopDelivery(const std::string& topic, const std::string& subscriberId); bool hasSubscription(const std::string& topic, const std::string& subscriberId); void closeSubscription(const std::string& topic, const std::string& subscriberId); void asyncCloseSubscription(const std::string& topic, const std::string& subscriberId, const OperationCallbackPtr& callback); virtual void addSubscriptionListener(SubscriptionListenerPtr& listener); virtual void removeSubscriptionListener(SubscriptionListenerPtr& listener); private: const DuplexChannelManagerPtr channelManager; }; // Unsubscribe Response Handler class UnsubscribeResponseHandler : public ResponseHandler { public: explicit UnsubscribeResponseHandler(const DuplexChannelManagerPtr& channelManager); virtual ~UnsubscribeResponseHandler() {} virtual void handleResponse(const PubSubResponsePtr& m, const PubSubDataPtr& txn, const DuplexChannelPtr& channel); }; // CloseSubscription Response Handler class CloseSubscriptionResponseHandler : public ResponseHandler { public: explicit 
CloseSubscriptionResponseHandler(const DuplexChannelManagerPtr& channelManager); virtual ~CloseSubscriptionResponseHandler() {} virtual void handleResponse(const PubSubResponsePtr& m, const PubSubDataPtr& txn, const DuplexChannelPtr& channel); }; }; #endif bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/lib/util.cpp000066400000000000000000000111461244507361200245430ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #ifdef HAVE_CONFIG_H #include #endif #include #include #include #include "util.h" #include "channel.h" #include #include #include static log4cxx::LoggerPtr logger(log4cxx::Logger::getLogger("hedwig."__FILE__)); using namespace Hedwig; #define MAX_HOSTNAME_LENGTH 256 const std::string UNITIALISED_HOST("UNINITIALISED HOST"); const int DEFAULT_PORT = 4080; const int DEFAULT_SSL_PORT = 9876; HostAddress::HostAddress() : initialised(false), address_str(), ssl_host_port(0) { } HostAddress::~HostAddress() { } bool HostAddress::isNullHost() const { return !initialised; } bool HostAddress::operator==(const HostAddress& other) const { return (other.ip() == ip() && other.port() == port()); } const std::string& HostAddress::getAddressString() const { if (!isNullHost()) { return address_str; } else { return UNITIALISED_HOST; } } uint32_t HostAddress::ip() const { return host_ip; } void HostAddress::updateIP(uint32_t ip) { this->host_ip = ip; } uint16_t HostAddress::port() const { return host_port; } uint16_t HostAddress::sslPort() const { return ssl_host_port; } void HostAddress::parse_string() { char* url = strdup(address_str.c_str()); LOG4CXX_DEBUG(logger, "Parse address : " << url); if (url == NULL) { LOG4CXX_ERROR(logger, "We seem to be out of memory"); throw OomException(); } int port = DEFAULT_PORT; int sslport = DEFAULT_SSL_PORT; char *colon = strchr(url, ':'); if (colon) { *colon = 0; colon++; char* sslcolon = strchr(colon, ':'); if (sslcolon) { *sslcolon = 0; sslcolon++; sslport = strtol(sslcolon, NULL, 10); if (sslport == 0) { LOG4CXX_ERROR(logger, "Invalid SSL port given: [" << sslcolon << "]"); free((void*)url); throw InvalidPortException(); } } port = strtol(colon, NULL, 10); if (port == 0) { LOG4CXX_ERROR(logger, "Invalid port given: [" << colon << "]"); free((void*)url); throw InvalidPortException(); } } int err = 0; struct addrinfo *addr; struct addrinfo hints; memset(&hints, 0, sizeof(struct addrinfo)); hints.ai_family = AF_INET; err = getaddrinfo(url, NULL, &hints, &addr); if (err != 0) { // getaddrinfo failures are described by gai_strerror, not hstrerror LOG4CXX_ERROR(logger, "Couldn't resolve host [" << url << "]:" << gai_strerror(err)); free((void*)url); throw HostResolutionException(); } sockaddr_in* sa_ptr = (sockaddr_in*)addr->ai_addr; struct sockaddr_in socket_addr; memset(&socket_addr, 0, sizeof(struct sockaddr_in)); socket_addr = *sa_ptr; socket_addr.sin_port = htons(port);
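// Note: the address strings accepted by parse_string take the form
// "host[:port[:ssl_port]]", with the port defaulting to DEFAULT_PORT (4080)
// and the SSL port to DEFAULT_SSL_PORT (9876). For example (hostname for
// illustration only):
//
//   HostAddress a = HostAddress::fromString("hub1.example.com");           // port 4080, ssl 9876
//   HostAddress b = HostAddress::fromString("hub1.example.com:4080:9876"); // explicit ports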
//socket_addr.sin_family = AF_INET; host_ip = ntohl(socket_addr.sin_addr.s_addr); host_port = ntohs(socket_addr.sin_port); ssl_host_port = sslport; freeaddrinfo(addr); free((void*)url); } HostAddress HostAddress::fromString(std::string str) { HostAddress h; h.address_str = str; h.parse_string(); h.initialised = true; return h; } ResponseCallbackAdaptor::ResponseCallbackAdaptor(const OperationCallbackPtr& opCallbackPtr) : opCallbackPtr(opCallbackPtr) { } void ResponseCallbackAdaptor::operationComplete(const ResponseBody& response) { opCallbackPtr->operationComplete(); } void ResponseCallbackAdaptor::operationFailed(const std::exception& exception) { opCallbackPtr->operationFailed(exception); } // Help Function std::ostream& Hedwig::operator<<(std::ostream& os, const HostAddress& host) { if (host.isNullHost()) { os << "(host:null)"; } else { os << "(host:" << host.getAddressString() << ", ip=" << host.ip() << ", port=" << host.port() << ", ssl_port=" << host.sslPort() << ")"; } return os; } std::ostream& std::operator<<(std::ostream& os, const TopicSubscriber& ts) { os << "(topic:" << ts.first << ", subscriber:" << ts.second << ")"; return os; } bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/lib/util.h000066400000000000000000000072731244507361200242160ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #ifndef HEDWIG_UTIL_H #define HEDWIG_UTIL_H #include #include #include #include #include #include #include #ifdef USE_BOOST_TR1 #include #else #include #endif #include #include namespace Hedwig { typedef std::pair TopicSubscriber; /** Representation of a hosts address */ class HostAddress { public: HostAddress(); ~HostAddress(); bool operator==(const HostAddress& other) const; bool isNullHost() const; const std::string& getAddressString() const; uint32_t ip() const; uint16_t port() const; uint16_t sslPort() const; // the real ip address is different from default server // if default server is a VIP void updateIP(uint32_t ip); static HostAddress fromString(std::string host); friend std::ostream& operator<<(std::ostream& os, const HostAddress& host); private: void parse_string(); bool initialised; std::string address_str; uint32_t host_ip; uint16_t host_port; uint16_t ssl_host_port; }; /** * An adaptor for OperationCallback */ class ResponseCallbackAdaptor : public Callback { public: ResponseCallbackAdaptor(const OperationCallbackPtr& opCallbackPtr); virtual void operationComplete(const ResponseBody& response); virtual void operationFailed(const std::exception& exception); private: OperationCallbackPtr opCallbackPtr; }; /** Hash a host address. Takes the least significant 16-bits of the address and the 16-bits of the port and packs them into one 32-bit number. 
While collisions are theoretically possible, they shouldn't happen as the hedwig servers should be in the same subnet. */ struct HostAddressHash : public std::unary_function<Hedwig::HostAddress, size_t> { size_t operator()(const Hedwig::HostAddress& address) const { // pack (OR together) the low 16 bits of the ip and the 16-bit port return ((address.ip() & 0xFFFF) << 16) | (address.port()); } }; /** Hash a topic subscriber by hashing the concatenation of its topic and subscriber id. */ struct TopicSubscriberHash : public std::unary_function<Hedwig::TopicSubscriber, size_t> { size_t operator()(const Hedwig::TopicSubscriber& topicsub) const { std::string fullstr = topicsub.first + topicsub.second; return std::tr1::hash<std::string>()(fullstr); } }; /** * Operation Type Hash */ struct OperationTypeHash : public std::unary_function<Hedwig::OperationType, size_t> { size_t operator()(const Hedwig::OperationType& type) const { return type; } }; }; // Since TopicSubscriber is a typedef of std::pair, log4cxx would look up 'operator<<' // in the std namespace. namespace std { // Helper function to print a TopicSubscriber std::ostream& operator<<(std::ostream& os, const Hedwig::TopicSubscriber& ts); }; #endif bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/m4/000077500000000000000000000000001244507361200226315ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/m4/ax_boost_asio.m4000066400000000000000000000073241244507361200257320ustar00rootroot00000000000000# =========================================================================== # http://www.gnu.org/software/autoconf-archive/ax_boost_asio.html # =========================================================================== # # SYNOPSIS # # AX_BOOST_ASIO # # DESCRIPTION # # Test for Asio library from the Boost C++ libraries. The macro requires a # preceding call to AX_BOOST_BASE. Further documentation is available at # . # # This macro calls: # # AC_SUBST(BOOST_ASIO_LIB) # # And sets: # # HAVE_BOOST_ASIO # # LICENSE # # Copyright (c) 2008 Thomas Porschberg # Copyright (c) 2008 Pete Greenwell # # Copying and distribution of this file, with or without modification, are # permitted in any medium without royalty provided the copyright notice # and this notice are preserved. This file is offered as-is, without any # warranty. #serial 9 AC_DEFUN([AX_BOOST_ASIO], [ AC_ARG_WITH([boost-asio], AS_HELP_STRING([--with-boost-asio@<:@=special-lib@:>@], [use the ASIO library from boost - it is possible to specify a certain library for the linker e.g.
--with-boost-asio=boost_system-gcc41-mt-1_34 ]), [ if test "$withval" = "no"; then want_boost="no" elif test "$withval" = "yes"; then want_boost="yes" ax_boost_user_asio_lib="" else want_boost="yes" ax_boost_user_asio_lib="$withval" fi ], [want_boost="yes"] ) if test "x$want_boost" = "xyes"; then AC_REQUIRE([AC_PROG_CC]) CPPFLAGS_SAVED="$CPPFLAGS" CPPFLAGS="$CPPFLAGS $BOOST_CPPFLAGS" export CPPFLAGS LDFLAGS_SAVED="$LDFLAGS" LDFLAGS="$LDFLAGS $BOOST_LDFLAGS" export LDFLAGS AC_CACHE_CHECK(whether the Boost::ASIO library is available, ax_cv_boost_asio, [AC_LANG_PUSH([C++]) AC_COMPILE_IFELSE(AC_LANG_PROGRAM([[ @%:@include ]], [[ boost::asio::io_service io; boost::system::error_code timer_result; boost::asio::deadline_timer t(io); t.cancel(); io.run_one(); return 0; ]]), ax_cv_boost_asio=yes, ax_cv_boost_asio=no) AC_LANG_POP([C++]) ]) if test "x$ax_cv_boost_asio" = "xyes"; then AC_DEFINE(HAVE_BOOST_ASIO,,[define if the Boost::ASIO library is available]) BN=boost_system if test "x$ax_boost_user_asio_lib" = "x"; then for ax_lib in $BN $BN-$CC $BN-$CC-mt $BN-$CC-mt-s $BN-$CC-s \ lib$BN lib$BN-$CC lib$BN-$CC-mt lib$BN-$CC-mt-s lib$BN-$CC-s \ $BN-mgw $BN-mgw $BN-mgw-mt $BN-mgw-mt-s $BN-mgw-s ; do AC_CHECK_LIB($ax_lib, main, [BOOST_ASIO_LIB="-l$ax_lib" AC_SUBST(BOOST_ASIO_LIB) link_thread="yes" break], [link_thread="no"]) done else for ax_lib in $ax_boost_user_asio_lib $BN-$ax_boost_user_asio_lib; do AC_CHECK_LIB($ax_lib, main, [BOOST_ASIO_LIB="-l$ax_lib" AC_SUBST(BOOST_ASIO_LIB) link_asio="yes" break], [link_asio="no"]) done fi if test "x$ax_lib" = "x"; then AC_MSG_ERROR(Could not find a version of the library!) fi if test "x$link_asio" = "xno"; then AC_MSG_ERROR(Could not link against $ax_lib !) fi fi CPPFLAGS="$CPPFLAGS_SAVED" LDFLAGS="$LDFLAGS_SAVED" fi ]) bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/m4/ax_boost_base.m4000066400000000000000000000235121244507361200257060ustar00rootroot00000000000000# =========================================================================== # http://www.gnu.org/software/autoconf-archive/ax_boost_base.html # =========================================================================== # # SYNOPSIS # # AX_BOOST_BASE([MINIMUM-VERSION], [ACTION-IF-FOUND], [ACTION-IF-NOT-FOUND]) # # DESCRIPTION # # Test for the Boost C++ libraries of a particular version (or newer) # # If no path to the installed boost library is given the macro searchs # under /usr, /usr/local, /opt and /opt/local and evaluates the # $BOOST_ROOT environment variable. Further documentation is available at # . # # This macro calls: # # AC_SUBST(BOOST_CPPFLAGS) / AC_SUBST(BOOST_LDFLAGS) # # And sets: # # HAVE_BOOST # # LICENSE # # Copyright (c) 2008 Thomas Porschberg # Copyright (c) 2009 Peter Adolphs # # Copying and distribution of this file, with or without modification, are # permitted in any medium without royalty provided the copyright notice # and this notice are preserved. This file is offered as-is, without any # warranty. #serial 17 AC_DEFUN([AX_BOOST_BASE], [ AC_ARG_WITH([boost], [AS_HELP_STRING([--with-boost@<:@=ARG@:>@], [use Boost library from a standard location (ARG=yes), from the specified location (ARG=), or disable it (ARG=no) @<:@ARG=yes@:>@ ])], [ if test "$withval" = "no"; then want_boost="no" elif test "$withval" = "yes"; then want_boost="yes" ac_boost_path="" else want_boost="yes" ac_boost_path="$withval" fi ], [want_boost="yes"]) AC_ARG_WITH([boost-libdir], AS_HELP_STRING([--with-boost-libdir=LIB_DIR], [Force given directory for boost libraries. 
Note that this will overwrite library path detection, so use this parameter only if default library detection fails and you know exactly where your boost libraries are located.]), [ if test -d "$withval" then ac_boost_lib_path="$withval" else AC_MSG_ERROR(--with-boost-libdir expected directory name) fi ], [ac_boost_lib_path=""] ) if test "x$want_boost" = "xyes"; then boost_lib_version_req=ifelse([$1], ,1.20.0,$1) boost_lib_version_req_shorten=`expr $boost_lib_version_req : '\([[0-9]]*\.[[0-9]]*\)'` boost_lib_version_req_major=`expr $boost_lib_version_req : '\([[0-9]]*\)'` boost_lib_version_req_minor=`expr $boost_lib_version_req : '[[0-9]]*\.\([[0-9]]*\)'` boost_lib_version_req_sub_minor=`expr $boost_lib_version_req : '[[0-9]]*\.[[0-9]]*\.\([[0-9]]*\)'` if test "x$boost_lib_version_req_sub_minor" = "x" ; then boost_lib_version_req_sub_minor="0" fi WANT_BOOST_VERSION=`expr $boost_lib_version_req_major \* 100000 \+ $boost_lib_version_req_minor \* 100 \+ $boost_lib_version_req_sub_minor` AC_MSG_CHECKING(for boostlib >= $boost_lib_version_req) succeeded=no dnl On x86_64 systems check for system libraries in both lib64 and lib. dnl The former is specified by FHS, but e.g. Debian does not adhere to dnl this (as it rises problems for generic multi-arch support). dnl The last entry in the list is chosen by default when no libraries dnl are found, e.g. when only header-only libraries are installed! libsubdirs="lib" if test `uname -m` = x86_64; then libsubdirs="lib64 lib lib64" fi dnl first we check the system location for boost libraries dnl this location ist chosen if boost libraries are installed with the --layout=system option dnl or if you install boost with RPM if test "$ac_boost_path" != ""; then BOOST_LDFLAGS="-L$ac_boost_path/$libsubdir" BOOST_CPPFLAGS="-I$ac_boost_path/include" elif test "$cross_compiling" != yes; then for ac_boost_path_tmp in /usr /usr/local /opt /opt/local ; do if test -d "$ac_boost_path_tmp/include/boost" && test -r "$ac_boost_path_tmp/include/boost"; then for libsubdir in $libsubdirs ; do if ls "$ac_boost_path_tmp/$libsubdir/libboost_"* >/dev/null 2>&1 ; then break; fi done BOOST_LDFLAGS="-L$ac_boost_path_tmp/$libsubdir" BOOST_CPPFLAGS="-I$ac_boost_path_tmp/include" break; fi done fi dnl overwrite ld flags if we have required special directory with dnl --with-boost-libdir parameter if test "$ac_boost_lib_path" != ""; then BOOST_LDFLAGS="-L$ac_boost_lib_path" fi CPPFLAGS_SAVED="$CPPFLAGS" CPPFLAGS="$CPPFLAGS $BOOST_CPPFLAGS" export CPPFLAGS LDFLAGS_SAVED="$LDFLAGS" LDFLAGS="$LDFLAGS $BOOST_LDFLAGS" export LDFLAGS AC_REQUIRE([AC_PROG_CXX]) AC_LANG_PUSH(C++) AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[ @%:@include ]], [[ #if BOOST_VERSION >= $WANT_BOOST_VERSION // Everything is okay #else # error Boost version is too old #endif ]])],[ AC_MSG_RESULT(yes) succeeded=yes found_system=yes ],[ ]) AC_LANG_POP([C++]) dnl if we found no boost with system layout we search for boost libraries dnl built and installed without the --layout=system option or for a staged(not installed) version if test "x$succeeded" != "xyes"; then _version=0 if test "$ac_boost_path" != ""; then if test -d "$ac_boost_path" && test -r "$ac_boost_path"; then for i in `ls -d $ac_boost_path/include/boost-* 2>/dev/null`; do _version_tmp=`echo $i | sed "s#$ac_boost_path##" | sed 's/\/include\/boost-//' | sed 's/_/./'` V_CHECK=`expr $_version_tmp \> $_version` if test "$V_CHECK" = "1" ; then _version=$_version_tmp fi VERSION_UNDERSCORE=`echo $_version | sed 's/\./_/'` 
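dnl VERSION_UNDERSCORE converts the dotted version back to the underscored
dnl directory form used by --layout=versioned installs, e.g. _version=1.46
dnl maps to the include directory boost-1_46 referenced just below.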
BOOST_CPPFLAGS="-I$ac_boost_path/include/boost-$VERSION_UNDERSCORE" done fi else if test "$cross_compiling" != yes; then for ac_boost_path in /usr /usr/local /opt /opt/local ; do if test -d "$ac_boost_path" && test -r "$ac_boost_path"; then for i in `ls -d $ac_boost_path/include/boost-* 2>/dev/null`; do _version_tmp=`echo $i | sed "s#$ac_boost_path##" | sed 's/\/include\/boost-//' | sed 's/_/./'` V_CHECK=`expr $_version_tmp \> $_version` if test "$V_CHECK" = "1" ; then _version=$_version_tmp best_path=$ac_boost_path fi done fi done VERSION_UNDERSCORE=`echo $_version | sed 's/\./_/'` BOOST_CPPFLAGS="-I$best_path/include/boost-$VERSION_UNDERSCORE" if test "$ac_boost_lib_path" = ""; then for libsubdir in $libsubdirs ; do if ls "$best_path/$libsubdir/libboost_"* >/dev/null 2>&1 ; then break; fi done BOOST_LDFLAGS="-L$best_path/$libsubdir" fi fi if test "x$BOOST_ROOT" != "x"; then for libsubdir in $libsubdirs ; do if ls "$BOOST_ROOT/stage/$libsubdir/libboost_"* >/dev/null 2>&1 ; then break; fi done if test -d "$BOOST_ROOT" && test -r "$BOOST_ROOT" && test -d "$BOOST_ROOT/stage/$libsubdir" && test -r "$BOOST_ROOT/stage/$libsubdir"; then version_dir=`expr //$BOOST_ROOT : '.*/\(.*\)'` stage_version=`echo $version_dir | sed 's/boost_//' | sed 's/_/./g'` stage_version_shorten=`expr $stage_version : '\([[0-9]]*\.[[0-9]]*\)'` V_CHECK=`expr $stage_version_shorten \>\= $_version` if test "$V_CHECK" = "1" -a "$ac_boost_lib_path" = "" ; then AC_MSG_NOTICE(We will use a staged boost library from $BOOST_ROOT) BOOST_CPPFLAGS="-I$BOOST_ROOT" BOOST_LDFLAGS="-L$BOOST_ROOT/stage/$libsubdir" fi fi fi fi CPPFLAGS="$CPPFLAGS $BOOST_CPPFLAGS" export CPPFLAGS LDFLAGS="$LDFLAGS $BOOST_LDFLAGS" export LDFLAGS AC_LANG_PUSH(C++) AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[ @%:@include ]], [[ #if BOOST_VERSION >= $WANT_BOOST_VERSION // Everything is okay #else # error Boost version is too old #endif ]])],[ AC_MSG_RESULT(yes) succeeded=yes found_system=yes ],[ ]) AC_LANG_POP([C++]) fi if test "$succeeded" != "yes" ; then if test "$_version" = "0" ; then AC_MSG_NOTICE([[We could not detect the boost libraries (version $boost_lib_version_req_shorten or higher). If you have a staged boost library (still not installed) please specify \$BOOST_ROOT in your environment and do not give a PATH to --with-boost option. If you are sure you have boost installed, then check your version number looking in . See http://randspringer.de/boost for more documentation.]]) else AC_MSG_NOTICE([Your boost libraries seems to old (version $_version).]) fi # execute ACTION-IF-NOT-FOUND (if present): ifelse([$3], , :, [$3]) else AC_SUBST(BOOST_CPPFLAGS) AC_SUBST(BOOST_LDFLAGS) AC_DEFINE(HAVE_BOOST,,[define if the Boost library is available]) # execute ACTION-IF-FOUND (if present): ifelse([$2], , :, [$2]) fi CPPFLAGS="$CPPFLAGS_SAVED" LDFLAGS="$LDFLAGS_SAVED" fi ]) bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/m4/ax_boost_thread.m4000066400000000000000000000124521244507361200262440ustar00rootroot00000000000000# =========================================================================== # http://www.gnu.org/software/autoconf-archive/ax_boost_thread.html # =========================================================================== # # SYNOPSIS # # AX_BOOST_THREAD # # DESCRIPTION # # Test for Thread library from the Boost C++ libraries. The macro requires # a preceding call to AX_BOOST_BASE. Further documentation is available at # . 
# # This macro calls: # # AC_SUBST(BOOST_THREAD_LIB) # # And sets: # # HAVE_BOOST_THREAD # # LICENSE # # Copyright (c) 2009 Thomas Porschberg # Copyright (c) 2009 Michael Tindal # # Copying and distribution of this file, with or without modification, are # permitted in any medium without royalty provided the copyright notice # and this notice are preserved. This file is offered as-is, without any # warranty. #serial 17 AC_DEFUN([AX_BOOST_THREAD], [ AC_ARG_WITH([boost-thread], AS_HELP_STRING([--with-boost-thread@<:@=special-lib@:>@], [use the Thread library from boost - it is possible to specify a certain library for the linker e.g. --with-boost-thread=boost_thread-gcc-mt ]), [ if test "$withval" = "no"; then want_boost="no" elif test "$withval" = "yes"; then want_boost="yes" ax_boost_user_thread_lib="" else want_boost="yes" ax_boost_user_thread_lib="$withval" fi ], [want_boost="yes"] ) if test "x$want_boost" = "xyes"; then AC_REQUIRE([AC_PROG_CC]) AC_REQUIRE([AC_CANONICAL_BUILD]) CPPFLAGS_SAVED="$CPPFLAGS" CPPFLAGS="$CPPFLAGS $BOOST_CPPFLAGS" export CPPFLAGS LDFLAGS_SAVED="$LDFLAGS" LDFLAGS="$LDFLAGS $BOOST_LDFLAGS" export LDFLAGS AC_CACHE_CHECK(whether the Boost::Thread library is available, ax_cv_boost_thread, [AC_LANG_PUSH([C++]) CXXFLAGS_SAVE=$CXXFLAGS if test "x$build_os" = "xsolaris" ; then CXXFLAGS="-pthreads $CXXFLAGS" elif test "x$build_os" = "xming32" ; then CXXFLAGS="-mthreads $CXXFLAGS" else CXXFLAGS="-pthread $CXXFLAGS" fi AC_COMPILE_IFELSE(AC_LANG_PROGRAM([[@%:@include ]], [[boost::thread_group thrds; return 0;]]), ax_cv_boost_thread=yes, ax_cv_boost_thread=no) CXXFLAGS=$CXXFLAGS_SAVE AC_LANG_POP([C++]) ]) if test "x$ax_cv_boost_thread" = "xyes"; then if test "x$build_os" = "xsolaris" ; then BOOST_CPPFLAGS="-pthreads $BOOST_CPPFLAGS" elif test "x$build_os" = "xming32" ; then BOOST_CPPFLAGS="-mthreads $BOOST_CPPFLAGS" else BOOST_CPPFLAGS="-pthread $BOOST_CPPFLAGS" fi AC_SUBST(BOOST_CPPFLAGS) AC_DEFINE(HAVE_BOOST_THREAD,,[define if the Boost::Thread library is available]) BOOSTLIBDIR=`echo $BOOST_LDFLAGS | sed -e 's/@<:@^\/@:>@*//'` LDFLAGS_SAVE=$LDFLAGS case "x$build_os" in *bsd* ) LDFLAGS="-pthread $LDFLAGS" break; ;; esac if test "x$ax_boost_user_thread_lib" = "x"; then for libextension in `ls $BOOSTLIBDIR/libboost_thread*.so* 2>/dev/null | sed 's,.*/,,' | sed -e 's;^lib\(boost_thread.*\)\.so.*$;\1;'` `ls $BOOSTLIBDIR/libboost_thread*.a* 2>/dev/null | sed 's,.*/,,' | sed -e 's;^lib\(boost_thread.*\)\.a*$;\1;'`; do ax_lib=${libextension} AC_CHECK_LIB($ax_lib, exit, [BOOST_THREAD_LIB="-l$ax_lib"; AC_SUBST(BOOST_THREAD_LIB) link_thread="yes"; break], [link_thread="no"]) done if test "x$link_thread" != "xyes"; then for libextension in `ls $BOOSTLIBDIR/boost_thread*.dll* 2>/dev/null | sed 's,.*/,,' | sed -e 's;^\(boost_thread.*\)\.dll.*$;\1;'` `ls $BOOSTLIBDIR/boost_thread*.a* 2>/dev/null | sed 's,.*/,,' | sed -e 's;^\(boost_thread.*\)\.a*$;\1;'` ; do ax_lib=${libextension} AC_CHECK_LIB($ax_lib, exit, [BOOST_THREAD_LIB="-l$ax_lib"; AC_SUBST(BOOST_THREAD_LIB) link_thread="yes"; break], [link_thread="no"]) done fi else for ax_lib in $ax_boost_user_thread_lib boost_thread-$ax_boost_user_thread_lib; do AC_CHECK_LIB($ax_lib, exit, [BOOST_THREAD_LIB="-l$ax_lib"; AC_SUBST(BOOST_THREAD_LIB) link_thread="yes"; break], [link_thread="no"]) done fi if test "x$ax_lib" = "x"; then AC_MSG_ERROR(Could not find a version of the library!) fi if test "x$link_thread" = "xno"; then AC_MSG_ERROR(Could not link against $ax_lib !) 
else case "x$build_os" in *bsd* ) BOOST_LDFLAGS="-pthread $BOOST_LDFLAGS" break; ;; esac fi fi CPPFLAGS="$CPPFLAGS_SAVED" LDFLAGS="$LDFLAGS_SAVED" fi ]) bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/m4/ax_doxygen.m4000066400000000000000000000426521244507361200252510ustar00rootroot00000000000000# =========================================================================== # http://www.gnu.org/software/autoconf-archive/ax_prog_doxygen.html # =========================================================================== # # SYNOPSIS # # DX_INIT_DOXYGEN(PROJECT-NAME, DOXYFILE-PATH, [OUTPUT-DIR]) # DX_DOXYGEN_FEATURE(ON|OFF) # DX_DOT_FEATURE(ON|OFF) # DX_HTML_FEATURE(ON|OFF) # DX_CHM_FEATURE(ON|OFF) # DX_CHI_FEATURE(ON|OFF) # DX_MAN_FEATURE(ON|OFF) # DX_RTF_FEATURE(ON|OFF) # DX_XML_FEATURE(ON|OFF) # DX_PDF_FEATURE(ON|OFF) # DX_PS_FEATURE(ON|OFF) # # DESCRIPTION # # The DX_*_FEATURE macros control the default setting for the given # Doxygen feature. Supported features are 'DOXYGEN' itself, 'DOT' for # generating graphics, 'HTML' for plain HTML, 'CHM' for compressed HTML # help (for MS users), 'CHI' for generating a seperate .chi file by the # .chm file, and 'MAN', 'RTF', 'XML', 'PDF' and 'PS' for the appropriate # output formats. The environment variable DOXYGEN_PAPER_SIZE may be # specified to override the default 'a4wide' paper size. # # By default, HTML, PDF and PS documentation is generated as this seems to # be the most popular and portable combination. MAN pages created by # Doxygen are usually problematic, though by picking an appropriate subset # and doing some massaging they might be better than nothing. CHM and RTF # are specific for MS (note that you can't generate both HTML and CHM at # the same time). The XML is rather useless unless you apply specialized # post-processing to it. # # The macros mainly control the default state of the feature. The use can # override the default by specifying --enable or --disable. The macros # ensure that contradictory flags are not given (e.g., # --enable-doxygen-html and --enable-doxygen-chm, # --enable-doxygen-anything with --disable-doxygen, etc.) Finally, each # feature will be automatically disabled (with a warning) if the required # programs are missing. # # Once all the feature defaults have been specified, call DX_INIT_DOXYGEN # with the following parameters: a one-word name for the project for use # as a filename base etc., an optional configuration file name (the # default is 'Doxyfile', the same as Doxygen's default), and an optional # output directory name (the default is 'doxygen-doc'). # # Automake Support # # The following is a template aminclude.am file for use with Automake. # Make targets and variables values are controlled by the various # DX_COND_* conditionals set by autoconf. # # The provided targets are: # # doxygen-doc: Generate all doxygen documentation. # # doxygen-run: Run doxygen, which will generate some of the # documentation (HTML, CHM, CHI, MAN, RTF, XML) # but will not do the post processing required # for the rest of it (PS, PDF, and some MAN). # # doxygen-man: Rename some doxygen generated man pages. # # doxygen-ps: Generate doxygen PostScript documentation. # # doxygen-pdf: Generate doxygen PDF documentation. # # Note that by default these are not integrated into the automake targets. 
# If doxygen is used to generate man pages, you can achieve this # integration by setting man3_MANS to the list of man pages generated and # then adding the dependency: # # $(man3_MANS): doxygen-doc # # This will cause make to run doxygen and generate all the documentation. # # The following variable is intended for use in Makefile.am: # # DX_CLEANFILES = everything to clean. # # Then add this variable to MOSTLYCLEANFILES. # # ----- begin aminclude.am ------------------------------------- # # ## --------------------------------- ## # ## Format-independent Doxygen rules. ## # ## --------------------------------- ## # # if DX_COND_doc # # ## ------------------------------- ## # ## Rules specific for HTML output. ## # ## ------------------------------- ## # # if DX_COND_html # # DX_CLEAN_HTML = @DX_DOCDIR@/html # # endif DX_COND_html # # ## ------------------------------ ## # ## Rules specific for CHM output. ## # ## ------------------------------ ## # # if DX_COND_chm # # DX_CLEAN_CHM = @DX_DOCDIR@/chm # # if DX_COND_chi # # DX_CLEAN_CHI = @DX_DOCDIR@/@PACKAGE@.chi # # endif DX_COND_chi # # endif DX_COND_chm # # ## ------------------------------ ## # ## Rules specific for MAN output. ## # ## ------------------------------ ## # # if DX_COND_man # # DX_CLEAN_MAN = @DX_DOCDIR@/man # # endif DX_COND_man # # ## ------------------------------ ## # ## Rules specific for RTF output. ## # ## ------------------------------ ## # # if DX_COND_rtf # # DX_CLEAN_RTF = @DX_DOCDIR@/rtf # # endif DX_COND_rtf # # ## ------------------------------ ## # ## Rules specific for XML output. ## # ## ------------------------------ ## # # if DX_COND_xml # # DX_CLEAN_XML = @DX_DOCDIR@/xml # # endif DX_COND_xml # # ## ----------------------------- ## # ## Rules specific for PS output. ## # ## ----------------------------- ## # # if DX_COND_ps # # DX_CLEAN_PS = @DX_DOCDIR@/@PACKAGE@.ps # # DX_PS_GOAL = doxygen-ps # # doxygen-ps: @DX_DOCDIR@/@PACKAGE@.ps # # @DX_DOCDIR@/@PACKAGE@.ps: @DX_DOCDIR@/@PACKAGE@.tag # cd @DX_DOCDIR@/latex; \ # rm -f *.aux *.toc *.idx *.ind *.ilg *.log *.out; \ # $(DX_LATEX) refman.tex; \ # $(MAKEINDEX_PATH) refman.idx; \ # $(DX_LATEX) refman.tex; \ # countdown=5; \ # while $(DX_EGREP) 'Rerun (LaTeX|to get cross-references right)' \ # refman.log > /dev/null 2>&1 \ # && test $$countdown -gt 0; do \ # $(DX_LATEX) refman.tex; \ # countdown=`expr $$countdown - 1`; \ # done; \ # $(DX_DVIPS) -o ../@PACKAGE@.ps refman.dvi # # endif DX_COND_ps # # ## ------------------------------ ## # ## Rules specific for PDF output. ## # ## ------------------------------ ## # # if DX_COND_pdf # # DX_CLEAN_PDF = @DX_DOCDIR@/@PACKAGE@.pdf # # DX_PDF_GOAL = doxygen-pdf # # doxygen-pdf: @DX_DOCDIR@/@PACKAGE@.pdf # # @DX_DOCDIR@/@PACKAGE@.pdf: @DX_DOCDIR@/@PACKAGE@.tag # cd @DX_DOCDIR@/latex; \ # rm -f *.aux *.toc *.idx *.ind *.ilg *.log *.out; \ # $(DX_PDFLATEX) refman.tex; \ # $(DX_MAKEINDEX) refman.idx; \ # $(DX_PDFLATEX) refman.tex; \ # countdown=5; \ # while $(DX_EGREP) 'Rerun (LaTeX|to get cross-references right)' \ # refman.log > /dev/null 2>&1 \ # && test $$countdown -gt 0; do \ # $(DX_PDFLATEX) refman.tex; \ # countdown=`expr $$countdown - 1`; \ # done; \ # mv refman.pdf ../@PACKAGE@.pdf # # endif DX_COND_pdf # # ## ------------------------------------------------- ## # ## Rules specific for LaTeX (shared for PS and PDF). 
## # ## ------------------------------------------------- ## # # if DX_COND_latex # # DX_CLEAN_LATEX = @DX_DOCDIR@/latex # # endif DX_COND_latex # # .PHONY: doxygen-run doxygen-doc $(DX_PS_GOAL) $(DX_PDF_GOAL) # # .INTERMEDIATE: doxygen-run $(DX_PS_GOAL) $(DX_PDF_GOAL) # # doxygen-run: @DX_DOCDIR@/@PACKAGE@.tag # # doxygen-doc: doxygen-run $(DX_PS_GOAL) $(DX_PDF_GOAL) # # @DX_DOCDIR@/@PACKAGE@.tag: $(DX_CONFIG) $(pkginclude_HEADERS) # rm -rf @DX_DOCDIR@ # $(DX_ENV) $(DX_DOXYGEN) $(srcdir)/$(DX_CONFIG) # # DX_CLEANFILES = \ # @DX_DOCDIR@/@PACKAGE@.tag \ # -r \ # $(DX_CLEAN_HTML) \ # $(DX_CLEAN_CHM) \ # $(DX_CLEAN_CHI) \ # $(DX_CLEAN_MAN) \ # $(DX_CLEAN_RTF) \ # $(DX_CLEAN_XML) \ # $(DX_CLEAN_PS) \ # $(DX_CLEAN_PDF) \ # $(DX_CLEAN_LATEX) # # endif DX_COND_doc # # ----- end aminclude.am --------------------------------------- # # LICENSE # # Copyright (c) 2009 Oren Ben-Kiki # # Copying and distribution of this file, with or without modification, are # permitted in any medium without royalty provided the copyright notice # and this notice are preserved. This file is offered as-is, without any # warranty. #serial 10 ## ----------## ## Defaults. ## ## ----------## DX_ENV="" AC_DEFUN([DX_FEATURE_doc], ON) AC_DEFUN([DX_FEATURE_dot], ON) AC_DEFUN([DX_FEATURE_man], OFF) AC_DEFUN([DX_FEATURE_html], ON) AC_DEFUN([DX_FEATURE_chm], OFF) AC_DEFUN([DX_FEATURE_chi], OFF) AC_DEFUN([DX_FEATURE_rtf], OFF) AC_DEFUN([DX_FEATURE_xml], OFF) AC_DEFUN([DX_FEATURE_pdf], ON) AC_DEFUN([DX_FEATURE_ps], ON) ## --------------- ## ## Private macros. ## ## --------------- ## # DX_ENV_APPEND(VARIABLE, VALUE) # ------------------------------ # Append VARIABLE="VALUE" to DX_ENV for invoking doxygen. AC_DEFUN([DX_ENV_APPEND], [AC_SUBST([DX_ENV], ["$DX_ENV $1='$2'"])]) # DX_DIRNAME_EXPR # --------------- # Expand into a shell expression prints the directory part of a path. AC_DEFUN([DX_DIRNAME_EXPR], [[expr ".$1" : '\(\.\)[^/]*$' \| "x$1" : 'x\(.*\)/[^/]*$']]) # DX_IF_FEATURE(FEATURE, IF-ON, IF-OFF) # ------------------------------------- # Expands according to the M4 (static) status of the feature. AC_DEFUN([DX_IF_FEATURE], [ifelse(DX_FEATURE_$1, ON, [$2], [$3])]) # DX_REQUIRE_PROG(VARIABLE, PROGRAM) # ---------------------------------- # Require the specified program to be found for the DX_CURRENT_FEATURE to work. AC_DEFUN([DX_REQUIRE_PROG], [ AC_PATH_TOOL([$1], [$2]) if test "$DX_FLAG_[]DX_CURRENT_FEATURE$$1" = 1; then AC_MSG_WARN([$2 not found - will not DX_CURRENT_DESCRIPTION]) AC_SUBST(DX_FLAG_[]DX_CURRENT_FEATURE, 0) fi ]) # DX_TEST_FEATURE(FEATURE) # ------------------------ # Expand to a shell expression testing whether the feature is active. AC_DEFUN([DX_TEST_FEATURE], [test "$DX_FLAG_$1" = 1]) # DX_CHECK_DEPEND(REQUIRED_FEATURE, REQUIRED_STATE) # ------------------------------------------------- # Verify that a required features has the right state before trying to turn on # the DX_CURRENT_FEATURE. AC_DEFUN([DX_CHECK_DEPEND], [ test "$DX_FLAG_$1" = "$2" \ || AC_MSG_ERROR([doxygen-DX_CURRENT_FEATURE ifelse([$2], 1, requires, contradicts) doxygen-DX_CURRENT_FEATURE]) ]) # DX_CLEAR_DEPEND(FEATURE, REQUIRED_FEATURE, REQUIRED_STATE) # ---------------------------------------------------------- # Turn off the DX_CURRENT_FEATURE if the required feature is off. 
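# For example, the chi feature below uses DX_CLEAR_DEPEND(chm, 1), so its
# default is cleared whenever the chm feature is off.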
AC_DEFUN([DX_CLEAR_DEPEND], [ test "$DX_FLAG_$1" = "$2" || AC_SUBST(DX_FLAG_[]DX_CURRENT_FEATURE, 0) ]) # DX_FEATURE_ARG(FEATURE, DESCRIPTION, # CHECK_DEPEND, CLEAR_DEPEND, # REQUIRE, DO-IF-ON, DO-IF-OFF) # -------------------------------------------- # Parse the command-line option controlling a feature. CHECK_DEPEND is called # if the user explicitly turns the feature on (and invokes DX_CHECK_DEPEND), # otherwise CLEAR_DEPEND is called to turn off the default state if a required # feature is disabled (using DX_CLEAR_DEPEND). REQUIRE performs additional # requirement tests (DX_REQUIRE_PROG). Finally, an automake flag is set and # DO-IF-ON or DO-IF-OFF are called according to the final state of the feature. AC_DEFUN([DX_ARG_ABLE], [ AC_DEFUN([DX_CURRENT_FEATURE], [$1]) AC_DEFUN([DX_CURRENT_DESCRIPTION], [$2]) AC_ARG_ENABLE(doxygen-$1, [AS_HELP_STRING(DX_IF_FEATURE([$1], [--disable-doxygen-$1], [--enable-doxygen-$1]), DX_IF_FEATURE([$1], [don't $2], [$2]))], [ case "$enableval" in #( y|Y|yes|Yes|YES) AC_SUBST([DX_FLAG_$1], 1) $3 ;; #( n|N|no|No|NO) AC_SUBST([DX_FLAG_$1], 0) ;; #( *) AC_MSG_ERROR([invalid value '$enableval' given to doxygen-$1]) ;; esac ], [ AC_SUBST([DX_FLAG_$1], [DX_IF_FEATURE([$1], 1, 0)]) $4 ]) if DX_TEST_FEATURE([$1]); then $5 : fi if DX_TEST_FEATURE([$1]); then AM_CONDITIONAL(DX_COND_$1, :) $6 : else AM_CONDITIONAL(DX_COND_$1, false) $7 : fi ]) ## -------------- ## ## Public macros. ## ## -------------- ## # DX_XXX_FEATURE(DEFAULT_STATE) # ----------------------------- AC_DEFUN([DX_DOXYGEN_FEATURE], [AC_DEFUN([DX_FEATURE_doc], [$1])]) AC_DEFUN([DX_MAN_FEATURE], [AC_DEFUN([DX_FEATURE_man], [$1])]) AC_DEFUN([DX_HTML_FEATURE], [AC_DEFUN([DX_FEATURE_html], [$1])]) AC_DEFUN([DX_CHM_FEATURE], [AC_DEFUN([DX_FEATURE_chm], [$1])]) AC_DEFUN([DX_CHI_FEATURE], [AC_DEFUN([DX_FEATURE_chi], [$1])]) AC_DEFUN([DX_RTF_FEATURE], [AC_DEFUN([DX_FEATURE_rtf], [$1])]) AC_DEFUN([DX_XML_FEATURE], [AC_DEFUN([DX_FEATURE_xml], [$1])]) AC_DEFUN([DX_XML_FEATURE], [AC_DEFUN([DX_FEATURE_xml], [$1])]) AC_DEFUN([DX_PDF_FEATURE], [AC_DEFUN([DX_FEATURE_pdf], [$1])]) AC_DEFUN([DX_PS_FEATURE], [AC_DEFUN([DX_FEATURE_ps], [$1])]) # DX_INIT_DOXYGEN(PROJECT, [CONFIG-FILE], [OUTPUT-DOC-DIR]) # --------------------------------------------------------- # PROJECT also serves as the base name for the documentation files. # The default CONFIG-FILE is "Doxyfile" and OUTPUT-DOC-DIR is "doxygen-doc". 
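#
# A typical configure.ac invocation looks something like the following
# sketch (the feature settings and the project name here are only
# illustrative placeholders):
#
#   DX_HTML_FEATURE(ON)
#   DX_PDF_FEATURE(OFF)
#   DX_INIT_DOXYGEN([myproject], [Doxyfile], [doxygen-doc])
#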
AC_DEFUN([DX_INIT_DOXYGEN], [ # Files: AC_SUBST([DX_PROJECT], [$1]) AC_SUBST([DX_CONFIG], [ifelse([$2], [], Doxyfile, [$2])]) AC_SUBST([DX_DOCDIR], [ifelse([$3], [], doxygen-doc, [$3])]) # Environment variables used inside doxygen.cfg: DX_ENV_APPEND(SRCDIR, $srcdir) DX_ENV_APPEND(PROJECT, $DX_PROJECT) DX_ENV_APPEND(DOCDIR, $DX_DOCDIR) DX_ENV_APPEND(VERSION, $PACKAGE_VERSION) # Doxygen itself: DX_ARG_ABLE(doc, [generate any doxygen documentation], [], [], [DX_REQUIRE_PROG([DX_DOXYGEN], doxygen) DX_REQUIRE_PROG([DX_PERL], perl)], [DX_ENV_APPEND(PERL_PATH, $DX_PERL)]) # Dot for graphics: DX_ARG_ABLE(dot, [generate graphics for doxygen documentation], [DX_CHECK_DEPEND(doc, 1)], [DX_CLEAR_DEPEND(doc, 1)], [DX_REQUIRE_PROG([DX_DOT], dot)], [DX_ENV_APPEND(HAVE_DOT, YES) DX_ENV_APPEND(DOT_PATH, [`DX_DIRNAME_EXPR($DX_DOT)`])], [DX_ENV_APPEND(HAVE_DOT, NO)]) # Man pages generation: DX_ARG_ABLE(man, [generate doxygen manual pages], [DX_CHECK_DEPEND(doc, 1)], [DX_CLEAR_DEPEND(doc, 1)], [], [DX_ENV_APPEND(GENERATE_MAN, YES)], [DX_ENV_APPEND(GENERATE_MAN, NO)]) # RTF file generation: DX_ARG_ABLE(rtf, [generate doxygen RTF documentation], [DX_CHECK_DEPEND(doc, 1)], [DX_CLEAR_DEPEND(doc, 1)], [], [DX_ENV_APPEND(GENERATE_RTF, YES)], [DX_ENV_APPEND(GENERATE_RTF, NO)]) # XML file generation: DX_ARG_ABLE(xml, [generate doxygen XML documentation], [DX_CHECK_DEPEND(doc, 1)], [DX_CLEAR_DEPEND(doc, 1)], [], [DX_ENV_APPEND(GENERATE_XML, YES)], [DX_ENV_APPEND(GENERATE_XML, NO)]) # (Compressed) HTML help generation: DX_ARG_ABLE(chm, [generate doxygen compressed HTML help documentation], [DX_CHECK_DEPEND(doc, 1)], [DX_CLEAR_DEPEND(doc, 1)], [DX_REQUIRE_PROG([DX_HHC], hhc)], [DX_ENV_APPEND(HHC_PATH, $DX_HHC) DX_ENV_APPEND(GENERATE_HTML, YES) DX_ENV_APPEND(GENERATE_HTMLHELP, YES)], [DX_ENV_APPEND(GENERATE_HTMLHELP, NO)]) # Seperate CHI file generation. 
DX_ARG_ABLE(chi, [generate doxygen seperate compressed HTML help index file], [DX_CHECK_DEPEND(chm, 1)], [DX_CLEAR_DEPEND(chm, 1)], [], [DX_ENV_APPEND(GENERATE_CHI, YES)], [DX_ENV_APPEND(GENERATE_CHI, NO)]) # Plain HTML pages generation: DX_ARG_ABLE(html, [generate doxygen plain HTML documentation], [DX_CHECK_DEPEND(doc, 1) DX_CHECK_DEPEND(chm, 0)], [DX_CLEAR_DEPEND(doc, 1) DX_CLEAR_DEPEND(chm, 0)], [], [DX_ENV_APPEND(GENERATE_HTML, YES)], [DX_TEST_FEATURE(chm) || DX_ENV_APPEND(GENERATE_HTML, NO)]) # PostScript file generation: DX_ARG_ABLE(ps, [generate doxygen PostScript documentation], [DX_CHECK_DEPEND(doc, 1)], [DX_CLEAR_DEPEND(doc, 1)], [DX_REQUIRE_PROG([DX_LATEX], latex) DX_REQUIRE_PROG([DX_MAKEINDEX], makeindex) DX_REQUIRE_PROG([DX_DVIPS], dvips) DX_REQUIRE_PROG([DX_EGREP], egrep)]) # PDF file generation: DX_ARG_ABLE(pdf, [generate doxygen PDF documentation], [DX_CHECK_DEPEND(doc, 1)], [DX_CLEAR_DEPEND(doc, 1)], [DX_REQUIRE_PROG([DX_PDFLATEX], pdflatex) DX_REQUIRE_PROG([DX_MAKEINDEX], makeindex) DX_REQUIRE_PROG([DX_EGREP], egrep)]) # LaTeX generation for PS and/or PDF: if DX_TEST_FEATURE(ps) || DX_TEST_FEATURE(pdf); then AM_CONDITIONAL(DX_COND_latex, :) DX_ENV_APPEND(GENERATE_LATEX, YES) else AM_CONDITIONAL(DX_COND_latex, false) DX_ENV_APPEND(GENERATE_LATEX, NO) fi # Paper size for PS and/or PDF: AC_ARG_VAR(DOXYGEN_PAPER_SIZE, [a4wide (default), a4, letter, legal or executive]) case "$DOXYGEN_PAPER_SIZE" in #( "") AC_SUBST(DOXYGEN_PAPER_SIZE, "") ;; #( a4wide|a4|letter|legal|executive) DX_ENV_APPEND(PAPER_SIZE, $DOXYGEN_PAPER_SIZE) ;; #( *) AC_MSG_ERROR([unknown DOXYGEN_PAPER_SIZE='$DOXYGEN_PAPER_SIZE']) ;; esac #For debugging: #echo DX_FLAG_doc=$DX_FLAG_doc #echo DX_FLAG_dot=$DX_FLAG_dot #echo DX_FLAG_man=$DX_FLAG_man #echo DX_FLAG_html=$DX_FLAG_html #echo DX_FLAG_chm=$DX_FLAG_chm #echo DX_FLAG_chi=$DX_FLAG_chi #echo DX_FLAG_rtf=$DX_FLAG_rtf #echo DX_FLAG_xml=$DX_FLAG_xml #echo DX_FLAG_pdf=$DX_FLAG_pdf #echo DX_FLAG_ps=$DX_FLAG_ps #echo DX_ENV=$DX_ENV ]) bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/m4/gtest.m4000066400000000000000000000062241244507361200242250ustar00rootroot00000000000000dnl GTEST_LIB_CHECK([minimum version [, dnl action if found [,action if not found]]]) dnl dnl Check for the presence of the Google Test library, optionally at a minimum dnl version, and indicate a viable version with the HAVE_GTEST flag. It defines dnl standard variables for substitution including GTEST_CPPFLAGS, dnl GTEST_CXXFLAGS, GTEST_LDFLAGS, and GTEST_LIBS. It also defines dnl GTEST_VERSION as the version of Google Test found. Finally, it provides dnl optional custom action slots in the event GTEST is found or not. AC_DEFUN([GTEST_LIB_CHECK], [ dnl Provide a flag to enable or disable Google Test usage. AC_ARG_ENABLE([gtest], [AS_HELP_STRING([--enable-gtest], [Enable tests using the Google C++ Testing Framework. 
(Default is disabled.)])], [], [enable_gtest=no]) AC_ARG_VAR([GTEST_CONFIG], [The exact path of Google Test's 'gtest-config' script.]) AC_ARG_VAR([GTEST_CPPFLAGS], [C-like preprocessor flags for Google Test.]) AC_ARG_VAR([GTEST_CXXFLAGS], [C++ compile flags for Google Test.]) AC_ARG_VAR([GTEST_LDFLAGS], [Linker path and option flags for Google Test.]) AC_ARG_VAR([GTEST_LIBS], [Library linking flags for Google Test.]) AC_ARG_VAR([GTEST_VERSION], [The version of Google Test available.]) HAVE_GTEST="no" AS_IF([test "x${enable_gtest}" != "xno"], [AC_MSG_CHECKING([for 'gtest-config']) AS_IF([test "x${enable_gtest}" != "xyes"], [AS_IF([test -x "${enable_gtest}/scripts/gtest-config"], [GTEST_CONFIG="${enable_gtest}/scripts/gtest-config"], [GTEST_CONFIG="${enable_gtest}/bin/gtest-config"]) AS_IF([test -x "${GTEST_CONFIG}"], [], [AC_MSG_RESULT([no]) AC_MSG_ERROR([dnl Unable to locate either a built or installed Google Test. The specific location '${enable_gtest}' was provided for a built or installed Google Test, but no 'gtest-config' script could be found at this location.]) ])], [AC_PATH_PROG([GTEST_CONFIG], [gtest-config])]) AS_IF([test -x "${GTEST_CONFIG}"], [AC_MSG_RESULT([${GTEST_CONFIG}]) m4_ifval([$1], [_gtest_min_version="--min-version=$1" AC_MSG_CHECKING([for Google Test at least version >= $1])], [_gtest_min_version="--min-version=0" AC_MSG_CHECKING([for Google Test])]) AS_IF([${GTEST_CONFIG} ${_gtest_min_version}], [AC_MSG_RESULT([yes]) HAVE_GTEST='yes'], [AC_MSG_RESULT([no])])], [AC_MSG_RESULT([no])]) AS_IF([test "x${HAVE_GTEST}" = "xyes"], [GTEST_CPPFLAGS=`${GTEST_CONFIG} --cppflags` GTEST_CXXFLAGS=`${GTEST_CONFIG} --cxxflags` GTEST_LDFLAGS=`${GTEST_CONFIG} --ldflags` GTEST_LIBS=`${GTEST_CONFIG} --libs` GTEST_VERSION=`${GTEST_CONFIG} --version` AC_DEFINE([HAVE_GTEST],[1],[Defined when Google Test is available.])], [AS_IF([test "x${enable_gtest}" = "xyes"], [AC_MSG_ERROR([dnl Google Test was enabled, but no viable version could be found.]) ])])]) AC_SUBST([HAVE_GTEST]) AM_CONDITIONAL([HAVE_GTEST],[test "x$HAVE_GTEST" = "xyes"]) AS_IF([test "x$HAVE_GTEST" = "xyes"], [m4_ifval([$2], [$2])], [m4_ifval([$3], [$3])]) ]) bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/scripts/000077500000000000000000000000001244507361200240005ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/scripts/log4cxx.conf000066400000000000000000000032461244507361200262440ustar00rootroot00000000000000# # # Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. 
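#
# The C++ tests locate this file through the LOG4CXX_CONF environment
# variable (see scripts/tester.sh); test/main.cpp loads it roughly like
# this:
#
#   if (getenv("LOG4CXX_CONF") == NULL) {
#     log4cxx::BasicConfigurator::configure();
#   } else {
#     log4cxx::PropertyConfigurator::configure(getenv("LOG4CXX_CONF"));
#   }
#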
# # log4j.appender.rootAppender=org.apache.log4j.ConsoleAppender log4j.appender.rootAppender.layout=org.apache.log4j.BasicLayout log4j.appender.hedwig=org.apache.log4j.RollingFileAppender #log4j.appender.hedwig=org.apache.log4j.ConsoleAppender log4j.appender.hedwig.fileName=./testLog.log log4j.appender.hedwig.layout=org.apache.log4j.PatternLayout log4j.appender.hedwig.layout.ConversionPattern=[%d{%H:%M:%S.%l}] %t %p %c - %m%n log4j.appender.hedwigtest=org.apache.log4j.RollingFileAppender #log4j.appender.hedwigtest=org.apache.log4j.ConsoleAppender log4j.appender.hedwigtest.fileName=./testLog.log log4j.appender.hedwigtest.layout=org.apache.log4j.PatternLayout log4j.appender.hedwigtest.layout.ConversionPattern=[%d{%H:%M:%S.%l}] %t %p %c - %m%n # category log4j.category.hedwig=DEBUG, hedwig log4j.category.hedwigtest=DEBUG, hedwigtest log4j.rootCategory=OFF #log4j.category.hedwig.channel=ERROR bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/scripts/network-delays.sh000066400000000000000000000045421244507361200273110ustar00rootroot00000000000000#!/bin/bash # # Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # setup_delays() { UNAME=`uname -s` echo "Setting delay to ${1}ms" case "$UNAME" in Darwin|FreeBSD) sudo ipfw pipe 1 config delay ${1}ms sudo ipfw add pipe 1 dst-port 4081 sudo ipfw add pipe 1 src-port 4081 sudo ipfw add pipe 1 dst-port 4082 sudo ipfw add pipe 1 src-port 4082 sudo ipfw add pipe 1 dst-port 4083 sudo ipfw add pipe 1 src-port 4083 ;; Linux) sudo tc qdisc add dev lo root handle 1: prio sudo tc qdisc add dev lo parent 1:3 handle 30: netem delay ${1}ms sudo tc filter add dev lo protocol ip parent 1:0 prio 3 u32 match ip dport 4081 0xffff flowid 1:3 sudo tc filter add dev lo protocol ip parent 1:0 prio 3 u32 match ip dport 4081 0xffff flowid 1:3 sudo tc filter add dev lo protocol ip parent 1:0 prio 3 u32 match ip sport 4082 0xffff flowid 1:3 sudo tc filter add dev lo protocol ip parent 1:0 prio 3 u32 match ip sport 4082 0xffff flowid 1:3 sudo tc filter add dev lo protocol ip parent 1:0 prio 3 u32 match ip sport 4083 0xffff flowid 1:3 sudo tc filter add dev lo protocol ip parent 1:0 prio 3 u32 match ip sport 4083 0xffff flowid 1:3 ;; *) echo "Unknown system type, $UNAME, only Linux, Darwin & FreeBSD supported" ;; esac } clear_delays() { UNAME=`uname -s` case "$UNAME" in Darwin|FreeBSD) echo "Flushing ipfw" sudo ipfw -f -q flush ;; Linux) echo "Clearing delay" sudo tc qdisc del dev lo root ;; *) echo "Unknown system type, $UNAME, only Linux, Darwin & FreeBSD supported" ;; esac } bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/scripts/server-control.sh000066400000000000000000000113471244507361200273260ustar00rootroot00000000000000#!/usr/bin/env bash # # Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. 
See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # BASE=../../../../../ BKSCRIPT=$BASE/bookkeeper-server/bin/bookkeeper HWSCRIPT=$BASE/hedwig-server/bin/hedwig ZKCLIENT=org.apache.zookeeper.ZooKeeperMain check_bk_down() { NUM_UP=100 for i in 0 1 2 3 4 5 6 7 8 9; do NUM_UP=`$BKSCRIPT $ZKCLIENT ls /ledgers/available 2> /dev/null | awk 'BEGIN{SERVERS=0} /^\[/ { gsub(/[,\[\]]/, ""); SERVERS=NF} END{ print SERVERS }'` if [ $NUM_UP == 0 ]; then break; fi sleep 1 done if [ $NUM_UP != 0 ]; then echo "Warning: Couldn't stop all bookies" exit 1; fi } check_bk_up() { NUM_BOOKIES=$1 NUM_UP=0 for i in 0 1 2 3 4 5 6 7 8 9; do NUM_UP=`$BKSCRIPT $ZKCLIENT ls /ledgers/available 2> /dev/null | awk 'BEGIN{SERVERS=0} /^\[/ { gsub(/[,\[\]]/, ""); SERVERS=NF} END{ print SERVERS }'` if [ $NUM_UP == $NUM_BOOKIES ]; then break; fi sleep 1 done if [ $NUM_UP != $NUM_BOOKIES ]; then echo "Couldn't start bookkeeper" exit 1; fi } check_hw_down() { REGION=$1 NUM_UP=100 for i in 0 1 2 3 4 5 6 7 8 9; do NUM_UP=`$BKSCRIPT $ZKCLIENT ls /hedwig/$REGION/hosts 2> /dev/null | awk 'BEGIN{SERVERS=0} /^\[/ { gsub(/[,\[\]]/, ""); SERVERS=NF} END{ print SERVERS }'` if [ $NUM_UP == 0 ]; then break; fi sleep 1 done if [ $NUM_UP != 0 ]; then echo "Warning: Couldn't stop all hedwig servers" exit 1; fi } check_hw_up() { REGION=$1 NUM_SERVERS=$2 NUM_UP=0 for i in 0 1 2 3 4 5 6 7 8 9; do NUM_UP=`$BKSCRIPT $ZKCLIENT ls /hedwig/$REGION/hosts 2> /dev/null | awk 'BEGIN{SERVERS=0} /^\[/ { gsub(/[,\[\]]/, ""); SERVERS=NF} END{ print SERVERS }'` if [ $NUM_UP == $NUM_SERVERS ]; then break; fi sleep 1 done if [ $NUM_UP != $NUM_SERVERS ]; then echo "Couldn't start hedwig" exit 1; fi } start_hw_server () { REGION=$1 COUNT=$2 PORT=$((4080+$COUNT)) SSL_PORT=$((9876+$COUNT)) export HEDWIG_LOG_CONF=/tmp/hw-log4j-$COUNT.properties cat > $HEDWIG_LOG_CONF < $HEDWIG_SERVER_CONF <&1 > hwoutput.$COUNT.log & echo $! > hwprocess.$COUNT.pid } start_cluster() { if [ -e bkprocess.pid ] || [ `ls hwprocess.*.pid 2> /dev/null | wc -l` != 0 ]; then stop_cluster; fi $BKSCRIPT localbookie 3 2>&1 > bkoutput.log & echo $! > bkprocess.pid check_bk_up 3 for i in 1 2 3; do start_hw_server CppUnitTest $i done check_hw_up CppUnitTest 3 } stop_cluster() { for i in hwprocess.*.pid; do if [ ! -e $i ]; then continue; fi kill `cat $i`; rm $i; done check_hw_down if [ ! -e bkprocess.pid ]; then return; fi kill `cat bkprocess.pid` rm bkprocess.pid check_bk_down } bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/scripts/tester.sh000066400000000000000000000100671244507361200256460ustar00rootroot00000000000000#!/bin/bash # # Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. 
The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # cd `dirname $0`; export LOG4CXX_CONF=`pwd`/log4cxx.conf source network-delays.sh source server-control.sh runtest() { if [ "z$HEDWIG_NETWORK_DELAY" != "z" ]; then setup_delays $HEDWIG_NETWORK_DELAY fi stop_cluster; start_cluster; if [ "z$2" != "z" ]; then ../test/hedwigtest -s true -m true else if [ "z$1" == "zssl" ]; then ../test/hedwigtest -s true elif [ "z$1" == "zmultiplex" ]; then ../test/hedwigtest -m true else ../test/hedwigtest fi fi RESULT=$? stop_cluster; if [ "z$HEDWIG_NETWORK_DELAY" != "z" ]; then clear_delays else cat < Run a single test tester.sh start-cluster Start a hedwig cluster tester.sh stop-cluster Stops a hedwig cluster tester.sh setup-delays Set the millisecond delay for accessing the hedwig servers for the tests. tester.sh clear-delays Clear the delay for accessing the hedwig servers. EOF ;; esac bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/test/000077500000000000000000000000001244507361200232705ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/test/Makefile.am000066400000000000000000000035551244507361200253340ustar00rootroot00000000000000# # Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # if HAVE_GTEST bin_PROGRAMS = hedwigtest hedwigtest_SOURCES = main.cpp utiltest.cpp publishtest.cpp subscribetest.cpp pubsubtest.cpp messageboundtest.cpp messagefiltertest.cpp throttledeliverytest.cpp multiplextest.cpp hedwigtest_CPPFLAGS = -I$(top_srcdir)/inc $(DEPS_CFLAGS) $(GTEST_CPPFLAGS) $(BOOST_CPPFLAGS) hedwigtest_CXXFLAGS = $(GTEST_CXXFLAGS) hedwigtest_LDADD = $(DEPS_LIBS) $(GTEST_LIBS) -L$(top_builddir)/lib -lhedwig01 hedwigtest_LDFLAGS = -no-undefined $(BOOST_ASIO_LIB) $(BOOST_LDFLAGS) $(BOOST_THREAD_LIB) $(GTEST_LDFLAGS) check: hedwigtest bash ../scripts/tester.sh all simplesslcheck: hedwigtest bash ../scripts/tester.sh ssl-simple-test simplecheck: hedwigtest bash ../scripts/tester.sh simple-test multiplexsslcheck: hedwigtest bash ../scripts/tester.sh ssl-multiplex-test multiplexcheck: hedwigtest bash ../scripts/tester.sh multiplex-test else check: @echo "\n\nYou haven't configured with gtest. Run the ./configure command with --enable-gtest=" @echo "i.e. 
./configure --enable-gtest=/home/user/src/gtest-1.6.0" @echo "See the README for more info\n\n\b" endif bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/test/main.cpp000066400000000000000000000053711244507361200247260ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #ifdef HAVE_CONFIG_H #include #endif #include "../lib/channel.h" #include "../lib/util.h" #include #include #include #include #include #include #include #include "util.h" #include "gtest/gtest.h" bool TestServerConfiguration::isSSL = false; std::string TestServerConfiguration::certFile = ""; bool TestServerConfiguration::multiplexing = false; int main( int argc, char **argv) { try { if (getenv("LOG4CXX_CONF") == NULL) { std::cerr << "Set LOG4CXX_CONF in your environment to get logging." << std::endl; log4cxx::BasicConfigurator::configure(); } else { log4cxx::PropertyConfigurator::configure(getenv("LOG4CXX_CONF")); } } catch (std::exception &e) { std::cerr << "exception caught while configuring log4cpp via : " << e.what() << std::endl; } catch (...) { std::cerr << "unknown exception while configuring log4cpp vi'." << std::endl; } // Enable SSL for testing int opt; while((opt = getopt(argc,argv,"s:c:m:")) > 0) { switch(opt) { case 's': if (std::string(optarg) == "true") { std::cout << "run in ssl mode...." << std::endl; TestServerConfiguration::isSSL = true; } else { TestServerConfiguration::isSSL = false; } break; case 'm': if (std::string(optarg) == "true") { std::cout << "run in multiplexing mode ..." << std::endl; TestServerConfiguration::multiplexing = true; } else { TestServerConfiguration::multiplexing = false; } break; case 'c': std::cout << "use cert file :" << optarg << std::endl; TestServerConfiguration::certFile = std::string(optarg); break; }//switch }//while ::testing::InitGoogleTest(&argc, argv); int ret = RUN_ALL_TESTS(); google::protobuf::ShutdownProtobufLibrary(); return ret; } bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/test/messageboundtest.cpp000066400000000000000000000157721244507361200273640ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
* See the License for the specific language governing permissions and * limitations under the License. */ #ifdef HAVE_CONFIG_H #include #endif #include "gtest/gtest.h" #include "../lib/clientimpl.h" #include #include #include #include #include #include "util.h" static log4cxx::LoggerPtr logger(log4cxx::Logger::getLogger("hedwig."__FILE__)); class MessageBoundConfiguration : public TestServerConfiguration { public: MessageBoundConfiguration() : TestServerConfiguration() {} virtual int getInt(const std::string& key, int defaultVal) const { if (key == Configuration::SUBSCRIPTION_MESSAGE_BOUND) { return 5; } return TestServerConfiguration::getInt(key, defaultVal); } }; class MessageBoundOrderCheckingMessageHandlerCallback : public Hedwig::MessageHandlerCallback { public: MessageBoundOrderCheckingMessageHandlerCallback(const int nextExpectedMsg) : nextExpectedMsg(nextExpectedMsg) { } virtual void consume(const std::string& topic, const std::string& subscriberId, const Hedwig::Message& msg, Hedwig::OperationCallbackPtr& callback) { boost::lock_guard lock(mutex); int thisMsg = atoi(msg.body().c_str()); LOG4CXX_DEBUG(logger, "received message " << thisMsg); if (thisMsg == nextExpectedMsg) { nextExpectedMsg++; } // checking msgId callback->operationComplete(); } int nextExpected() { return nextExpectedMsg; } protected: boost::mutex mutex; int nextExpectedMsg; }; void sendXExpectLastY(Hedwig::Publisher& pub, Hedwig::Subscriber& sub, const std::string& topic, const std::string& subid, int X, int Y) { for (int i = 0; i < X;) { std::stringstream oss; oss << i; try { pub.publish(topic, oss.str()); ++i; } catch (std::exception &e) { LOG4CXX_WARN(logger, "Exception when publishing message " << i << " : " << e.what()); } } sub.subscribe(topic, subid, Hedwig::SubscribeRequest::ATTACH); MessageBoundOrderCheckingMessageHandlerCallback* cb = new MessageBoundOrderCheckingMessageHandlerCallback(X - Y); Hedwig::MessageHandlerCallbackPtr handler(cb); sub.startDelivery(topic, subid, handler); for (int i = 0; i < 100; i++) { if (cb->nextExpected() == X) { break; } else { sleep(1); } } ASSERT_TRUE(cb->nextExpected() == X); sub.stopDelivery(topic, subid); sub.closeSubscription(topic, subid); } TEST(MessageBoundTest, testMessageBound) { Hedwig::Configuration* conf = new MessageBoundConfiguration(); std::auto_ptr confptr(conf); Hedwig::Client* client = new Hedwig::Client(*conf); std::auto_ptr clientptr(client); Hedwig::Subscriber& sub = client->getSubscriber(); Hedwig::Publisher& pub = client->getPublisher(); std::string topic = "testMessageBound"; std::string subid = "testSubId"; sub.subscribe(topic, subid, Hedwig::SubscribeRequest::CREATE_OR_ATTACH); sub.closeSubscription(topic, subid); sendXExpectLastY(pub, sub, topic, subid, 100, 5); } TEST(MessageBoundTest, testMultipleSubscribers) { Hedwig::Configuration* conf = new TestServerConfiguration(); std::auto_ptr confptr(conf); Hedwig::Client* client = new Hedwig::Client(*conf); std::auto_ptr clientptr(client); Hedwig::Subscriber& sub = client->getSubscriber(); Hedwig::Publisher& pub = client->getPublisher(); Hedwig::SubscriptionOptions options5; options5.set_messagebound(5); options5.set_createorattach(Hedwig::SubscribeRequest::CREATE_OR_ATTACH); Hedwig::SubscriptionOptions options20; options20.set_messagebound(20); options20.set_createorattach(Hedwig::SubscribeRequest::CREATE_OR_ATTACH); Hedwig::SubscriptionOptions optionsUnlimited; optionsUnlimited.set_createorattach(Hedwig::SubscribeRequest::CREATE_OR_ATTACH); std::string topic = "testMultipleSubscribers"; 
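  // Each subscription below is created with a different message bound
  // (5, 20, unlimited). sendXExpectLastY publishes X messages, re-attaches
  // the subscriber and verifies that exactly the last Y messages arrive,
  // so the bounded subscriptions should only see their most recent 5 or
  // 20 messages while the unbounded one sees all 1000.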
std::string subid5 = "testSubId5"; std::string subid20 = "testSubId20"; std::string subidUnlimited = "testSubIdUnlimited"; sub.subscribe(topic, subid5, options5); sub.closeSubscription(topic, subid5); sendXExpectLastY(pub, sub, topic, subid5, 1000, 5); sub.subscribe(topic, subid20, options20); sub.closeSubscription(topic, subid20); sendXExpectLastY(pub, sub, topic, subid20, 1000, 20); sub.subscribe(topic, subidUnlimited, optionsUnlimited); sub.closeSubscription(topic, subidUnlimited); sendXExpectLastY(pub, sub, topic, subidUnlimited, 1000, 1000); sub.unsubscribe(topic, subidUnlimited); sendXExpectLastY(pub, sub, topic, subid20, 1000, 20); sub.unsubscribe(topic, subid20); sendXExpectLastY(pub, sub, topic, subid5, 1000, 5); sub.unsubscribe(topic, subid5); } TEST(MessageBoundTest, testUpdateMessageBound) { Hedwig::Configuration* conf = new TestServerConfiguration(); std::auto_ptr confptr(conf); Hedwig::Client* client = new Hedwig::Client(*conf); std::auto_ptr clientptr(client); Hedwig::Subscriber& sub = client->getSubscriber(); Hedwig::Publisher& pub = client->getPublisher(); Hedwig::SubscriptionOptions options5; options5.set_messagebound(5); options5.set_createorattach(Hedwig::SubscribeRequest::CREATE_OR_ATTACH); Hedwig::SubscriptionOptions options20; options20.set_messagebound(20); options20.set_createorattach(Hedwig::SubscribeRequest::CREATE_OR_ATTACH); Hedwig::SubscriptionOptions options10; options10.set_messagebound(10); options10.set_createorattach(Hedwig::SubscribeRequest::CREATE_OR_ATTACH); std::string topic = "testUpdateMessageBound"; std::string subid = "updateSubId"; sub.subscribe(topic, subid, options5); sub.closeSubscription(topic, subid); sendXExpectLastY(pub, sub, topic, subid, 50, 5); // update bound to 20 sub.subscribe(topic, subid, options20); sub.closeSubscription(topic, subid); sendXExpectLastY(pub, sub, topic, subid, 50, 20); // update bound to 10 sub.subscribe(topic, subid, options10); sub.closeSubscription(topic, subid); sendXExpectLastY(pub, sub, topic, subid, 50, 10); // message bound is not provided, no update sub.subscribe(topic, subid, Hedwig::SubscribeRequest::CREATE_OR_ATTACH); sub.closeSubscription(topic, subid); sendXExpectLastY(pub, sub, topic, subid, 50, 10); } bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/test/messagefiltertest.cpp000066400000000000000000000173401244507361200275330ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
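 *
 * Overview: publishNums tags every published message with a 'mod' property
 * (the message number modulo M), receiveNumModM subscribes with the user
 * option 'MOD' and installs a ModMessageFilter via startDeliveryWithFilter,
 * so only the messages whose 'mod' property matches the subscription's
 * 'MOD' option should be delivered.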
*/ #ifdef HAVE_CONFIG_H #include #endif #include "gtest/gtest.h" #include "../lib/clientimpl.h" #include #include #include #include #include #include "util.h" static log4cxx::LoggerPtr logger(log4cxx::Logger::getLogger("hedwig."__FILE__)); class MessageFilterConfiguration : public TestServerConfiguration { public: MessageFilterConfiguration() : TestServerConfiguration() {} virtual bool getBool(const std::string& key, bool defaultVal) const { if (key == Configuration::SUBSCRIBER_AUTOCONSUME) { return false; } else { return TestServerConfiguration::getBool(key, defaultVal); } } }; class ModMessageFilter : public Hedwig::ClientMessageFilter { public: ModMessageFilter() : mod(0) { } virtual void setSubscriptionPreferences(const std::string& topic, const std::string& subscriberId, const Hedwig::SubscriptionPreferencesPtr& preferences) { if (!preferences->has_options()) { return; } const Hedwig::Map& userOptions = preferences->options(); int numOpts = userOptions.entries_size(); for (int i=0; i lock(mutex); int value = atoi(msg.body().c_str()); if(value > start) { LOG4CXX_DEBUG(logger, "received message " << value); if (value == nextValue) { nextValue += gap; } } callback->operationComplete(); if (doConsume) { sub.consume(topic, subscriberId, msg.msgid()); } } int nextExpected() { return nextValue; } protected: boost::mutex mutex; Hedwig::Subscriber& sub; int start; int nextValue; int gap; bool doConsume; }; void publishNums(Hedwig::Publisher& pub, const std::string& topic, int start, int num, int M) { for (int i=1; i<=num; i++) { int value = start + i; int mod = value % M; std::stringstream valSS; valSS << value; std::stringstream modSS; modSS << mod; Hedwig::Message msg; msg.set_body(valSS.str()); Hedwig::MessageHeader* header = msg.mutable_header(); Hedwig::Map* properties = header->mutable_properties(); Hedwig::Map_Entry* entry = properties->add_entries(); entry->set_key("mod"); entry->set_value(modSS.str()); pub.publish(topic, msg); } } void receiveNumModM(Hedwig::Subscriber& sub, const std::string& topic, const std::string& subid, int start, int num, int M, bool consume) { Hedwig::SubscriptionOptions options; options.set_createorattach(Hedwig::SubscribeRequest::ATTACH); Hedwig::Map* userOptions = options.mutable_options(); Hedwig::Map_Entry* opt = userOptions->add_entries(); opt->set_key("MOD"); std::stringstream modSS; modSS << M; opt->set_value(modSS.str()); sub.subscribe(topic, subid, options); int base = start + M - start % M; int end = base + num * M; GapCheckingMessageHandlerCallback * cb = new GapCheckingMessageHandlerCallback(sub, start, base, M, consume); Hedwig::MessageHandlerCallbackPtr handler(cb); Hedwig::ClientMessageFilterPtr filter(new ModMessageFilter()); sub.startDeliveryWithFilter(topic, subid, handler, filter); for (int i = 0; i < 100; i++) { if (cb->nextExpected() == end) { break; } else { sleep(1); } } ASSERT_TRUE(cb->nextExpected() == end); sub.stopDelivery(topic, subid); sub.closeSubscription(topic, subid); } TEST(MessageFilterTest, testNullMessageFilter) { Hedwig::Configuration* conf = new MessageFilterConfiguration(); std::auto_ptr confptr(conf); Hedwig::Client* client = new Hedwig::Client(*conf); std::auto_ptr clientptr(client); Hedwig::Subscriber& sub = client->getSubscriber(); std::string topic = "testNullMessageFilter"; std::string subid = "myTestSubid"; sub.subscribe(topic, subid, Hedwig::SubscribeRequest::CREATE_OR_ATTACH); GapCheckingMessageHandlerCallback * cb = new GapCheckingMessageHandlerCallback(sub, 0, 0, 0, true); Hedwig::MessageHandlerCallbackPtr 
handler(cb); Hedwig::ClientMessageFilterPtr filter(new ModMessageFilter()); ASSERT_THROW(sub.startDeliveryWithFilter(topic, subid, handler, Hedwig::ClientMessageFilterPtr()), Hedwig::NullMessageFilterException); ASSERT_THROW(sub.startDeliveryWithFilter(topic, subid, Hedwig::MessageHandlerCallbackPtr(), filter), Hedwig::NullMessageHandlerException); } TEST(MessageFilterTest, testMessageFilter) { Hedwig::Configuration* conf = new MessageFilterConfiguration(); std::auto_ptr confptr(conf); Hedwig::Client* client = new Hedwig::Client(*conf); std::auto_ptr clientptr(client); Hedwig::Subscriber& sub = client->getSubscriber(); Hedwig::Publisher& pub = client->getPublisher(); std::string topic = "testMessageFilter"; std::string subid = "myTestSubid"; sub.subscribe(topic, subid, Hedwig::SubscribeRequest::CREATE_OR_ATTACH); sub.closeSubscription(topic, subid); publishNums(pub, topic, 0, 100, 2); receiveNumModM(sub, topic, subid, 0, 50, 2, true); } TEST(MessageFilterTest, testUpdateMessageFilter) { Hedwig::Configuration* conf = new MessageFilterConfiguration(); std::auto_ptr confptr(conf); Hedwig::Client* client = new Hedwig::Client(*conf); std::auto_ptr clientptr(client); Hedwig::Subscriber& sub = client->getSubscriber(); Hedwig::Publisher& pub = client->getPublisher(); std::string topic = "testUpdateMessageFilter"; std::string subid = "myTestSubid"; sub.subscribe(topic, subid, Hedwig::SubscribeRequest::CREATE_OR_ATTACH); sub.closeSubscription(topic, subid); publishNums(pub, topic, 0, 100, 2); receiveNumModM(sub, topic, subid, 0, 50, 2, false); receiveNumModM(sub, topic, subid, 0, 25, 4, false); receiveNumModM(sub, topic, subid, 0, 33, 3, false); } bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/test/multiplextest.cpp000066400000000000000000000333651244507361200267310ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
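 *
 * Overview: these tests run with SUBSCRIPTION_CHANNEL_SHARING_ENABLED set
 * to true, so all subscriptions are multiplexed over shared channels; they
 * verify that stopping delivery or closing the subscription for one
 * (topic, subscriber) pair does not disturb delivery for the other pairs
 * sharing the channel.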
*/ #ifdef HAVE_CONFIG_H #include #endif #include "gtest/gtest.h" #include "../lib/clientimpl.h" #include #include #include #include #include #include "util.h" static log4cxx::LoggerPtr logger(log4cxx::Logger::getLogger("hedwig."__FILE__)); class MultiplexConfiguration : public TestServerConfiguration { public: MultiplexConfiguration() : TestServerConfiguration() {} virtual bool getBool(const std::string& key, bool defaultVal) const { if (key == Configuration::SUBSCRIBER_AUTOCONSUME) { return false; } else if (key == Configuration::SUBSCRIPTION_CHANNEL_SHARING_ENABLED) { return true; } else { return TestServerConfiguration::getBool(key, defaultVal); } } }; class MultiplexMessageHandlerCallback : public Hedwig::MessageHandlerCallback { public: MultiplexMessageHandlerCallback(Hedwig::Subscriber& sub, const int start, const int numMsgsAtFirstRun, const bool receiveSecondRun, const int numMsgsAtSecondRun) : sub(sub), next(start), start(start), numMsgsAtFirstRun(numMsgsAtFirstRun), numMsgsAtSecondRun(numMsgsAtSecondRun), receiveSecondRun(receiveSecondRun) { } virtual void consume(const std::string& topic, const std::string& subscriberId, const Hedwig::Message& msg, Hedwig::OperationCallbackPtr& callback) { const int value = atoi(msg.body().c_str()); LOG4CXX_DEBUG(logger, "received message " << value); boost::lock_guard lock(mutex); if (value == next) { ++next; } else { LOG4CXX_ERROR(logger, "Did not receive expected value " << next << ", got " << value); next = 0; firstLatch.setSuccess(false); firstLatch.notify(); secondLatch.setSuccess(false); secondLatch.notify(); } if (numMsgsAtFirstRun + start == next) { firstLatch.setSuccess(true); firstLatch.notify(); } if (receiveSecondRun) { if (numMsgsAtFirstRun + numMsgsAtSecondRun + start == next) { secondLatch.setSuccess(true); secondLatch.notify(); } } else { if (numMsgsAtFirstRun + start + 1 == next) { secondLatch.setSuccess(true); secondLatch.notify(); } } callback->operationComplete(); sub.consume(topic, subscriberId, msg.msgid()); } void checkFirstRun() { firstLatch.timed_wait(10000); ASSERT_TRUE(firstLatch.wasSuccess()); ASSERT_EQ(numMsgsAtFirstRun + start, next); } void checkSecondRun() { if (receiveSecondRun) { secondLatch.timed_wait(10000); ASSERT_TRUE(secondLatch.wasSuccess()); ASSERT_EQ(numMsgsAtFirstRun + numMsgsAtSecondRun + start, next); } else { secondLatch.timed_wait(3000); ASSERT_TRUE(!secondLatch.wasSuccess()); ASSERT_EQ(numMsgsAtFirstRun + start, next); } } protected: Hedwig::Subscriber& sub; boost::mutex mutex; int next; const int start; const int numMsgsAtFirstRun; const int numMsgsAtSecondRun; SimpleWaitCondition firstLatch; SimpleWaitCondition secondLatch; const bool receiveSecondRun; }; class MultiplexThrottleDeliveryMessageHandlerCallback : public Hedwig::MessageHandlerCallback { public: MultiplexThrottleDeliveryMessageHandlerCallback(Hedwig::Subscriber& sub, const int start, const int numMsgs, const bool enableThrottle, const int numMsgsThrottle) : sub(sub), next(start), start(start), numMsgs(numMsgs), numMsgsThrottle(numMsgsThrottle), enableThrottle(enableThrottle) { } virtual void consume(const std::string& topic, const std::string& subscriberId, const Hedwig::Message& msg, Hedwig::OperationCallbackPtr& callback) { const int value = atoi(msg.body().c_str()); LOG4CXX_DEBUG(logger, "received message " << value); boost::lock_guard lock(mutex); if (value == next) { ++next; } else { LOG4CXX_ERROR(logger, "Did not receive expected value " << next << ", got " << value); next = 0; throttleLatch.setSuccess(false); 
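      // An out-of-order value is fatal here: after resetting next, both latches
      // are failed and notified so that checkThrottle()/checkAfterThrottle()
      // below wake up immediately instead of sitting out their full timed_wait.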
throttleLatch.notify(); nonThrottleLatch.setSuccess(false); nonThrottleLatch.notify(); } if (next == numMsgsThrottle + start + 1) { throttleLatch.setSuccess(true); throttleLatch.notify(); } else if (next == numMsgs + 1) { nonThrottleLatch.setSuccess(true); nonThrottleLatch.notify(); } callback->operationComplete(); if (enableThrottle) { if (next > numMsgsThrottle + start) { sub.consume(topic, subscriberId, msg.msgid()); } } else { sub.consume(topic, subscriberId, msg.msgid()); } } void checkThrottle() { if (enableThrottle) { throttleLatch.timed_wait(3000); ASSERT_TRUE(!throttleLatch.wasSuccess()); ASSERT_EQ(numMsgsThrottle + start, next); } else { throttleLatch.timed_wait(10000); ASSERT_TRUE(throttleLatch.wasSuccess()); nonThrottleLatch.timed_wait(10000); ASSERT_TRUE(nonThrottleLatch.wasSuccess()); ASSERT_EQ(numMsgs + start, next); } } void checkAfterThrottle() { if (enableThrottle) { nonThrottleLatch.timed_wait(10000); ASSERT_TRUE(nonThrottleLatch.wasSuccess()); ASSERT_EQ(numMsgs + start, next); } } protected: Hedwig::Subscriber& sub; boost::mutex mutex; int next; const int start; const int numMsgs; const int numMsgsThrottle; const bool enableThrottle; SimpleWaitCondition throttleLatch; SimpleWaitCondition nonThrottleLatch; }; TEST(MultiplexTest, testStopDelivery) { Hedwig::Configuration* conf = new MultiplexConfiguration(); std::auto_ptr confptr(conf); Hedwig::Client* client = new Hedwig::Client(*conf); std::auto_ptr clientptr(client); Hedwig::Subscriber& sub = client->getSubscriber(); Hedwig::Publisher& pub = client->getPublisher(); const int numMsgs = 20; std::string topic1 = "testStopDelivery-1"; std::string subid1 = "mysubid-1"; std::string topic2 = "testStopDelivery-2"; std::string subid2 = "mysubid-2"; MultiplexMessageHandlerCallback * cb11 = new MultiplexMessageHandlerCallback(sub, 1, numMsgs, true, numMsgs); MultiplexMessageHandlerCallback * cb12 = new MultiplexMessageHandlerCallback(sub, 1, numMsgs, false, 0); MultiplexMessageHandlerCallback * cb21 = new MultiplexMessageHandlerCallback(sub, 1, numMsgs, false, 0); MultiplexMessageHandlerCallback * cb22 = new MultiplexMessageHandlerCallback(sub, 1, numMsgs, true, numMsgs); Hedwig::MessageHandlerCallbackPtr handler11(cb11); Hedwig::MessageHandlerCallbackPtr handler12(cb12); Hedwig::MessageHandlerCallbackPtr handler21(cb21); Hedwig::MessageHandlerCallbackPtr handler22(cb22); sub.subscribe(topic1, subid1, Hedwig::SubscribeRequest::CREATE); sub.subscribe(topic1, subid2, Hedwig::SubscribeRequest::CREATE); sub.subscribe(topic2, subid1, Hedwig::SubscribeRequest::CREATE); sub.subscribe(topic2, subid2, Hedwig::SubscribeRequest::CREATE); // start deliveries sub.startDelivery(topic1, subid1, handler11); sub.startDelivery(topic1, subid2, handler12); sub.startDelivery(topic2, subid1, handler21); sub.startDelivery(topic2, subid2, handler22); // first publish for (int i = 1; i <= numMsgs; i++) { std::stringstream oss; oss << i; pub.publish(topic1, oss.str()); pub.publish(topic2, oss.str()); } // check first run cb11->checkFirstRun(); cb12->checkFirstRun(); cb21->checkFirstRun(); cb22->checkFirstRun(); // stop delivery for and sub.stopDelivery(topic1, subid2); sub.stopDelivery(topic2, subid1); // second publish for (int i = numMsgs+1; i <= 2*numMsgs; i++) { std::stringstream oss; oss << i; pub.publish(topic1, oss.str()); pub.publish(topic2, oss.str()); } cb11->checkSecondRun(); cb12->checkSecondRun(); cb21->checkSecondRun(); cb22->checkSecondRun(); } TEST(MultiplexTest, testCloseSubscription) { Hedwig::Configuration* conf = new 
MultiplexConfiguration(); std::auto_ptr confptr(conf); Hedwig::Client* client = new Hedwig::Client(*conf); std::auto_ptr clientptr(client); Hedwig::Subscriber& sub = client->getSubscriber(); Hedwig::Publisher& pub = client->getPublisher(); const int numMsgs = 20; std::string topic1 = "testCloseSubscription-1"; std::string subid1 = "mysubid-1"; std::string topic2 = "testCloseSubscription-2"; std::string subid2 = "mysubid-2"; MultiplexMessageHandlerCallback * cb11 = new MultiplexMessageHandlerCallback(sub, 1, numMsgs, true, numMsgs); MultiplexMessageHandlerCallback * cb12 = new MultiplexMessageHandlerCallback(sub, 1, numMsgs, false, 0); MultiplexMessageHandlerCallback * cb21 = new MultiplexMessageHandlerCallback(sub, 1, numMsgs, false, 0); MultiplexMessageHandlerCallback * cb22 = new MultiplexMessageHandlerCallback(sub, 1, numMsgs, true, numMsgs); Hedwig::MessageHandlerCallbackPtr handler11(cb11); Hedwig::MessageHandlerCallbackPtr handler12(cb12); Hedwig::MessageHandlerCallbackPtr handler21(cb21); Hedwig::MessageHandlerCallbackPtr handler22(cb22); sub.subscribe(topic1, subid1, Hedwig::SubscribeRequest::CREATE); sub.subscribe(topic1, subid2, Hedwig::SubscribeRequest::CREATE); sub.subscribe(topic2, subid1, Hedwig::SubscribeRequest::CREATE); sub.subscribe(topic2, subid2, Hedwig::SubscribeRequest::CREATE); // start deliveries sub.startDelivery(topic1, subid1, handler11); sub.startDelivery(topic1, subid2, handler12); sub.startDelivery(topic2, subid1, handler21); sub.startDelivery(topic2, subid2, handler22); // first publish for (int i = 1; i <= numMsgs; i++) { std::stringstream oss; oss << i; pub.publish(topic1, oss.str()); pub.publish(topic2, oss.str()); } // check first run cb11->checkFirstRun(); cb12->checkFirstRun(); cb21->checkFirstRun(); cb22->checkFirstRun(); // close subscription for and sub.closeSubscription(topic1, subid2); sub.closeSubscription(topic2, subid1); // second publish for (int i = numMsgs+1; i <= 2*numMsgs; i++) { std::stringstream oss; oss << i; pub.publish(topic1, oss.str()); pub.publish(topic2, oss.str()); } cb11->checkSecondRun(); cb12->checkSecondRun(); cb21->checkSecondRun(); cb22->checkSecondRun(); } TEST(MultiplexTest, testThrottle) { Hedwig::Configuration* conf = new MultiplexConfiguration(); std::auto_ptr confptr(conf); Hedwig::Client* client = new Hedwig::Client(*conf); std::auto_ptr clientptr(client); Hedwig::Subscriber& sub = client->getSubscriber(); Hedwig::Publisher& pub = client->getPublisher(); const int numMsgs = 10; std::string topic1 = "testThrottle-1"; std::string subid1 = "mysubid-1"; std::string topic2 = "testThrottle-2"; std::string subid2 = "mysubid-2"; MultiplexThrottleDeliveryMessageHandlerCallback * cb11 = new MultiplexThrottleDeliveryMessageHandlerCallback(sub, 1, 3*numMsgs, false, numMsgs); MultiplexThrottleDeliveryMessageHandlerCallback * cb12 = new MultiplexThrottleDeliveryMessageHandlerCallback(sub, 1, 3*numMsgs, true, numMsgs); MultiplexThrottleDeliveryMessageHandlerCallback * cb21 = new MultiplexThrottleDeliveryMessageHandlerCallback(sub, 1, 3*numMsgs, true, numMsgs); MultiplexThrottleDeliveryMessageHandlerCallback * cb22 = new MultiplexThrottleDeliveryMessageHandlerCallback(sub, 1, 3*numMsgs, false, numMsgs); Hedwig::MessageHandlerCallbackPtr handler11(cb11); Hedwig::MessageHandlerCallbackPtr handler12(cb12); Hedwig::MessageHandlerCallbackPtr handler21(cb21); Hedwig::MessageHandlerCallbackPtr handler22(cb22); Hedwig::SubscriptionOptions options; options.set_createorattach(Hedwig::SubscribeRequest::CREATE_OR_ATTACH); 
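  // messagewindowsize (set just below) caps how many delivered-but-unconsumed
  // messages the server keeps in flight per subscription. The two callbacks
  // above that were built with enableThrottle=true (cb12, cb21) deliberately
  // withhold consume() for their first numMsgs messages, so those two
  // subscriptions should stall at exactly this window; a sketch of the consume
  // loop that later unblocks them (seq ids start at 1):
  //
  //   for (int i = 1; i <= numMsgs; i++) {
  //     Hedwig::MessageSeqId msgid;
  //     msgid.set_localcomponent(i);
  //     sub.consume(topic1, subid2, msgid);
  //   }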
options.set_messagewindowsize(numMsgs); sub.subscribe(topic1, subid1, options); sub.subscribe(topic1, subid2, options); sub.subscribe(topic2, subid1, options); sub.subscribe(topic2, subid2, options); // start deliveries sub.startDelivery(topic1, subid1, handler11); sub.startDelivery(topic1, subid2, handler12); sub.startDelivery(topic2, subid1, handler21); sub.startDelivery(topic2, subid2, handler22); // first publish for (int i = 1; i <= 3*numMsgs; i++) { std::stringstream oss; oss << i; pub.publish(topic1, oss.str()); pub.publish(topic2, oss.str()); } // check first run cb11->checkThrottle(); cb12->checkThrottle(); cb21->checkThrottle(); cb22->checkThrottle(); // consume messages to not throttle them for (int i=1; i<=numMsgs; i++) { Hedwig::MessageSeqId msgid; msgid.set_localcomponent(i); sub.consume(topic1, subid2, msgid); sub.consume(topic2, subid1, msgid); } cb11->checkAfterThrottle(); cb12->checkAfterThrottle(); cb21->checkAfterThrottle(); cb22->checkAfterThrottle(); } bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/test/publishtest.cpp000066400000000000000000000240101244507361200263370ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ #ifdef HAVE_CONFIG_H #include #endif #include "gtest/gtest.h" #include "../lib/clientimpl.h" #include #include #include #include #include #include "util.h" static log4cxx::LoggerPtr logger(log4cxx::Logger::getLogger("hedwig."__FILE__)); TEST(PublishTest, testPublishByMessage) { Hedwig::Configuration* conf = new TestServerConfiguration(); Hedwig::Client* client = new Hedwig::Client(*conf); Hedwig::Publisher& pub = client->getPublisher(); Hedwig::Message syncMsg; syncMsg.set_body("sync publish by Message"); pub.publish("testTopic", syncMsg); SimpleWaitCondition* cond = new SimpleWaitCondition(); Hedwig::OperationCallbackPtr testcb(new TestCallback(cond)); Hedwig::Message asyncMsg; asyncMsg.set_body("async publish by Message"); pub.asyncPublish("testTopic", asyncMsg, testcb); cond->wait(); ASSERT_TRUE(cond->wasSuccess()); delete cond; delete client; delete conf; } TEST(PublishTest, testSyncPublish) { Hedwig::Configuration* conf = new TestServerConfiguration(); Hedwig::Client* client = new Hedwig::Client(*conf); Hedwig::Publisher& pub = client->getPublisher(); pub.publish("testTopic", "testMessage 1"); delete client; delete conf; } TEST(PublishTest, testSyncPublishWithResponse) { Hedwig::Configuration* conf = new TestServerConfiguration(); Hedwig::Client* client = new Hedwig::Client(*conf); Hedwig::Publisher& pub = client->getPublisher(); int numMsgs = 20; for(int i=1; i<=numMsgs; i++) { Hedwig::PublishResponsePtr pubResponse = pub.publish("testSyncPublishWithResponse", "testMessage " + i); ASSERT_EQ(i, (int)pubResponse->publishedmsgid().localcomponent()); } delete client; delete conf; } TEST(PublishTest, testAsyncPublish) { SimpleWaitCondition* cond = new SimpleWaitCondition(); Hedwig::Configuration* conf = new TestServerConfiguration(); Hedwig::Client* client = new Hedwig::Client(*conf); Hedwig::Publisher& pub = client->getPublisher(); Hedwig::OperationCallbackPtr testcb(new TestCallback(cond)); pub.asyncPublish("testTopic", "async test message", testcb); cond->wait(); ASSERT_TRUE(cond->wasSuccess()); delete cond; delete client; delete conf; } TEST(PublishTest, testAsyncPublishWithResponse) { Hedwig::Configuration* conf = new TestServerConfiguration(); Hedwig::Client* client = new Hedwig::Client(*conf); Hedwig::Publisher& pub = client->getPublisher(); int numMsgs = 20; for (int i=1; i<=numMsgs; i++) { SimpleWaitCondition* cond = new SimpleWaitCondition(); TestPublishResponseCallback* callback = new TestPublishResponseCallback(cond); Hedwig::PublishResponseCallbackPtr testcb(callback); Hedwig::Message asyncMsg; asyncMsg.set_body("testAsyncPublishWithResponse-" + i); pub.asyncPublishWithResponse("testAsyncPublishWithResponse", asyncMsg, testcb); cond->wait(); ASSERT_TRUE(cond->wasSuccess()); ASSERT_EQ(i, (int)callback->getResponse()->publishedmsgid().localcomponent()); delete cond; } delete client; delete conf; } TEST(PublishTest, testMultipleAsyncPublish) { SimpleWaitCondition* cond1 = new SimpleWaitCondition(); SimpleWaitCondition* cond2 = new SimpleWaitCondition(); SimpleWaitCondition* cond3 = new SimpleWaitCondition(); Hedwig::Configuration* conf = new TestServerConfiguration(); Hedwig::Client* client = new Hedwig::Client(*conf); Hedwig::Publisher& pub = client->getPublisher(); Hedwig::OperationCallbackPtr testcb1(new TestCallback(cond1)); Hedwig::OperationCallbackPtr testcb2(new TestCallback(cond2)); Hedwig::OperationCallbackPtr testcb3(new TestCallback(cond3)); pub.asyncPublish("testTopic", "async test message #1", testcb1); pub.asyncPublish("testTopic", "async test message #2", 
testcb2); pub.asyncPublish("testTopic", "async test message #3", testcb3); cond3->wait(); ASSERT_TRUE(cond3->wasSuccess()); cond2->wait(); ASSERT_TRUE(cond2->wasSuccess()); cond1->wait(); ASSERT_TRUE(cond1->wasSuccess()); delete cond3; delete cond2; delete cond1; delete client; delete conf; } class UnresolvedDefaultHostCallback : public Hedwig::OperationCallback { public: UnresolvedDefaultHostCallback(SimpleWaitCondition* cond) : cond(cond) {} virtual void operationComplete() { cond->setSuccess(false); cond->notify(); } virtual void operationFailed(const std::exception& exception) { LOG4CXX_ERROR(logger, "Failed with exception : " << exception.what()); cond->setSuccess(exception.what() == Hedwig::HostResolutionException().what()); cond->notify(); } private: SimpleWaitCondition *cond; }; TEST(PublishTest, testPublishWithUnresolvedDefaultHost) { std::string invalidHost(""); Hedwig::Configuration* conf = new TestServerConfiguration(invalidHost); SimpleWaitCondition* cond = new SimpleWaitCondition(); Hedwig::Client* client = new Hedwig::Client(*conf); Hedwig::Publisher& pub = client->getPublisher(); Hedwig::OperationCallbackPtr testcb(new UnresolvedDefaultHostCallback(cond)); pub.asyncPublish("testTopic", "testPublishWithUnresolvedDefaultHost", testcb); cond->wait(); ASSERT_TRUE(cond->wasSuccess()); delete cond; delete client; delete conf; } /* void simplePublish() { LOG4CXX_DEBUG(logger, ">>> simplePublish"); SimpleWaitCondition* cond = new SimpleWaitCondition(); Hedwig::Configuration* conf = new Configuration1(); Hedwig::Client* client = new Hedwig::Client(*conf); Hedwig::Publisher& pub = client->getPublisher(); Hedwig::OperationCallbackPtr testcb(new TestCallback(cond)); pub.asyncPublish("foobar", "barfoo", testcb); LOG4CXX_DEBUG(logger, "wait for response"); cond->wait(); delete cond; LOG4CXX_DEBUG(logger, "got response"); delete client; delete conf; LOG4CXX_DEBUG(logger, "<<< simplePublish"); } class MyMessageHandler : public Hedwig::MessageHandlerCallback { public: MyMessageHandler(SimpleWaitCondition* cond) : cond(cond) {} void consume(const std::string& topic, const std::string& subscriberId, const Hedwig::Message& msg, Hedwig::OperationCallbackPtr& callback) { LOG4CXX_DEBUG(logger, "Topic: " << topic << " subscriberId: " << subscriberId); LOG4CXX_DEBUG(logger, " Message: " << msg.body()); callback->operationComplete(); cond->setTrue(); cond->signal(); } private: SimpleWaitCondition* cond; };*/ /* void simplePublishAndSubscribe() { SimpleWaitCondition* cond1 = new SimpleWaitCondition(); SimpleWaitCondition* cond2 = new SimpleWaitCondition(); SimpleWaitCondition* cond3 = new SimpleWaitCondition(); Hedwig::Configuration* conf = new Configuration1(); Hedwig::Client* client = new Hedwig::Client(*conf); Hedwig::Publisher& pub = client->getPublisher(); Hedwig::Subscriber& sub = client->getSubscriber(); std::string topic("foobar"); std::string sid("mysubscriber"); Hedwig::OperationCallbackPtr testcb1(new TestCallback(cond1)); sub.asyncSubscribe(topic, sid, Hedwig::SubscribeRequest::CREATE_OR_ATTACH, testcb1); Hedwig::MessageHandlerCallbackPtr messagecb(new MyMessageHandler(cond2)); sub.startDelivery(topic, sid, messagecb); cond1->wait(); Hedwig::OperationCallbackPtr testcb2(new TestCallback(cond3)); pub.asyncPublish("foobar", "barfoo", testcb2); cond3->wait(); cond2->wait(); delete cond1; delete cond3; delete cond2; delete client; delete conf; } void publishAndSubscribeWithRedirect() { SimpleWaitCondition* cond1 = new SimpleWaitCondition(); SimpleWaitCondition* cond2 = new 
SimpleWaitCondition(); SimpleWaitCondition* cond3 = new SimpleWaitCondition(); SimpleWaitCondition* cond4 = new SimpleWaitCondition(); Hedwig::Configuration* publishconf = new Configuration1(); Hedwig::Configuration* subscribeconf = new Configuration2(); Hedwig::Client* publishclient = new Hedwig::Client(*publishconf); Hedwig::Publisher& pub = publishclient->getPublisher(); Hedwig::Client* subscribeclient = new Hedwig::Client(*subscribeconf); Hedwig::Subscriber& sub = subscribeclient->getSubscriber(); LOG4CXX_DEBUG(logger, "publishing"); Hedwig::OperationCallbackPtr testcb2(new TestCallback(cond3)); pub.asyncPublish("foobar", "barfoo", testcb2); cond3->wait(); LOG4CXX_DEBUG(logger, "Subscribing"); std::string topic("foobar"); std::string sid("mysubscriber"); Hedwig::OperationCallbackPtr testcb1(new TestCallback(cond1)); sub.asyncSubscribe(topic, sid, Hedwig::SubscribeRequest::CREATE_OR_ATTACH, testcb1); LOG4CXX_DEBUG(logger, "Starting delivery"); Hedwig::MessageHandlerCallbackPtr messagecb(new MyMessageHandler(cond2)); sub.startDelivery(topic, sid, messagecb); LOG4CXX_DEBUG(logger, "Subscribe wait"); cond1->wait(); Hedwig::OperationCallbackPtr testcb3(new TestCallback(cond4)); pub.asyncPublish("foobar", "barfoo", testcb3); cond4->wait(); LOG4CXX_DEBUG(logger, "Delivery wait"); cond2->wait(); sub.stopDelivery(topic, sid); delete cond1; delete cond3; delete cond2; delete cond4; delete subscribeclient; delete publishclient; delete publishconf; delete subscribeconf; }*/ bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/test/pubsubtest.cpp000066400000000000000000000554561244507361200262130ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ #ifdef HAVE_CONFIG_H #include #endif #include #include "gtest/gtest.h" #include #include "../lib/clientimpl.h" #include #include #include #include #include #include "util.h" static log4cxx::LoggerPtr logger(log4cxx::Logger::getLogger("hedwig."__FILE__)); class StartStopDeliveryMsgHandler : public Hedwig::MessageHandlerCallback { public: StartStopDeliveryMsgHandler(Hedwig::Subscriber& subscriber, const int nextValue) : subscriber(subscriber), nextValue(nextValue) {} virtual void consume(const std::string& topic, const std::string& subscriberId, const Hedwig::Message& msg, Hedwig::OperationCallbackPtr& callback) { { boost::lock_guard lock(mutex); int curVal = atoi(msg.body().c_str()); LOG4CXX_DEBUG(logger, "received message " << curVal); if (curVal == nextValue) { ++nextValue; } callback->operationComplete(); } ASSERT_THROW(subscriber.startDelivery(topic, subscriberId, Hedwig::MessageHandlerCallbackPtr()), Hedwig::StartingDeliveryException); ASSERT_THROW(subscriber.stopDelivery(topic, subscriberId), Hedwig::StartingDeliveryException); } int getNextValue() { return nextValue; } private: Hedwig::Subscriber& subscriber; boost::mutex mutex; int nextValue; }; class PubSubMessageHandlerCallback : public Hedwig::MessageHandlerCallback { public: PubSubMessageHandlerCallback(const std::string& topic, const std::string& subscriberId) : messagesReceived(0), topic(topic), subscriberId(subscriberId) { } virtual void consume(const std::string& topic, const std::string& subscriberId, const Hedwig::Message& msg, Hedwig::OperationCallbackPtr& callback) { if (topic == this->topic && subscriberId == this->subscriberId) { boost::lock_guard lock(mutex); messagesReceived++; lastMessage = msg.body(); callback->operationComplete(); } } std::string getLastMessage() { boost::lock_guard lock(mutex); std::string s = lastMessage; return s; } int numMessagesReceived() { boost::lock_guard lock(mutex); int i = messagesReceived; return i; } protected: boost::mutex mutex; int messagesReceived; std::string lastMessage; std::string topic; std::string subscriberId; }; // order checking callback class PubSubOrderCheckingMessageHandlerCallback : public Hedwig::MessageHandlerCallback { public: PubSubOrderCheckingMessageHandlerCallback(const std::string& topic, const std::string& subscriberId, const int startMsgId, const int sleepTimeInConsume) : topic(topic), subscriberId(subscriberId), startMsgId(startMsgId), nextMsgId(startMsgId), isInOrder(true), sleepTimeInConsume(sleepTimeInConsume) { } virtual void consume(const std::string& topic, const std::string& subscriberId, const Hedwig::Message& msg, Hedwig::OperationCallbackPtr& callback) { if (topic == this->topic && subscriberId == this->subscriberId) { boost::lock_guard lock(mutex); int newMsgId = atoi(msg.body().c_str()); if (newMsgId == nextMsgId + 1) { // only calculate unduplicated entries ++nextMsgId; } // checking msgId LOG4CXX_DEBUG(logger, "received message " << newMsgId); if (startMsgId >= 0) { // need to check ordering if start msg id is larger than 0 if (isInOrder) { // in some environments, ssl channel encountering error like Bad File Descriptor. // the channel would disconnect and reconnect. A duplicated message would be received. // so just checking we received a larger out-of-order message. 
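        // (a duplicate or smaller id is therefore tolerated and simply
        // re-recorded; only a jump past startMsgId + 1 is reported as an
        // out-of-order gap)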
if (newMsgId > startMsgId + 1) { LOG4CXX_ERROR(logger, "received out-of-order message : expected " << (startMsgId + 1) << ", actual " << newMsgId); isInOrder = false; } else { startMsgId = newMsgId; } } } else { // we set first msg id as startMsgId when startMsgId is -1 startMsgId = newMsgId; } callback->operationComplete(); sleep(sleepTimeInConsume); } } int nextExpectedMsgId() { boost::lock_guard lock(mutex); return nextMsgId; } bool inOrder() { boost::lock_guard lock(mutex); return isInOrder; } protected: boost::mutex mutex; std::string topic; std::string subscriberId; int startMsgId; int nextMsgId; bool isInOrder; int sleepTimeInConsume; }; // Publisher integer until finished class IntegerPublisher { public: IntegerPublisher(const std::string &topic, int startMsgId, int numMsgs, int sleepTime, Hedwig::Publisher &pub, long runTime) : topic(topic), startMsgId(startMsgId), numMsgs(numMsgs), sleepTime(sleepTime), pub(pub), running(true), runTime(runTime) { } void operator()() { int i = 1; long beginTime = curTime(); long elapsedTime = 0; while (running) { try { int msg = startMsgId + i; std::stringstream ss; ss << msg; pub.publish(topic, ss.str()); sleep(sleepTime); if (numMsgs > 0 && i >= numMsgs) { running = false; } else { if (i % 100 == 0 && (elapsedTime = (curTime() - beginTime)) >= runTime) { LOG4CXX_DEBUG(logger, "Elapsed time : " << elapsedTime); running = false; } } ++i; } catch (std::exception &e) { LOG4CXX_WARN(logger, "Exception when publishing messages : " << e.what()); } } } long curTime() { struct timeval tv; long mtime; gettimeofday(&tv, NULL); mtime = tv.tv_sec * 1000 + tv.tv_usec / 1000.0 + 0.5; return mtime; } private: std::string topic; int startMsgId; int numMsgs; int sleepTime; Hedwig::Publisher& pub; bool running; long runTime; }; TEST(PubSubTest, testStartDeliveryWithoutSub) { Hedwig::Configuration* conf = new TestServerConfiguration(); std::auto_ptr confptr(conf); Hedwig::Client* client = new Hedwig::Client(*conf); std::auto_ptr clientptr(client); Hedwig::Subscriber& sub = client->getSubscriber(); std::string topic = "testStartDeliveryWithoutSub"; std::string sid = "mysub"; PubSubMessageHandlerCallback* cb = new PubSubMessageHandlerCallback(topic, sid); Hedwig::MessageHandlerCallbackPtr handler(cb); ASSERT_THROW(sub.startDelivery(topic, sid, handler), Hedwig::NotSubscribedException); } TEST(PubSubTest, testAlreadyStartDelivery) { Hedwig::Configuration* conf = new TestServerConfiguration(); std::auto_ptr confptr(conf); Hedwig::Client* client = new Hedwig::Client(*conf); std::auto_ptr clientptr(client); Hedwig::Subscriber& sub = client->getSubscriber(); std::string topic = "testAlreadyStartDelivery"; std::string sid = "mysub"; sub.subscribe(topic, sid, Hedwig::SubscribeRequest::CREATE_OR_ATTACH); PubSubMessageHandlerCallback* cb = new PubSubMessageHandlerCallback(topic, sid); Hedwig::MessageHandlerCallbackPtr handler(cb); sub.startDelivery(topic, sid, handler); ASSERT_THROW(sub.startDelivery(topic, sid, handler), Hedwig::AlreadyStartDeliveryException); } TEST(PubSubTest, testStopDeliveryWithoutSub) { Hedwig::Configuration* conf = new TestServerConfiguration(); std::auto_ptr confptr(conf); Hedwig::Client* client = new Hedwig::Client(*conf); std::auto_ptr clientptr(client); Hedwig::Subscriber& sub = client->getSubscriber(); ASSERT_THROW(sub.stopDelivery("testStopDeliveryWithoutSub", "mysub"), Hedwig::NotSubscribedException); } TEST(PubSubTest, testStopDeliveryTwice) { Hedwig::Configuration* conf = new TestServerConfiguration(); std::auto_ptr confptr(conf); 
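  // (std::auto_ptr here, as in the other tests, just pins the configuration's
  // and client's lifetimes to the TEST scope, so both are torn down even when
  // an ASSERT_* macro bails out of the test body early)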
Hedwig::Client* client = new Hedwig::Client(*conf); std::auto_ptr clientptr(client); Hedwig::Subscriber& sub = client->getSubscriber(); std::string topic = "testStopDeliveryTwice"; std::string subid = "mysub"; sub.subscribe(topic, subid, Hedwig::SubscribeRequest::CREATE_OR_ATTACH); // it is ok to stop delivery without start delivery sub.stopDelivery(topic, subid); PubSubMessageHandlerCallback* cb = new PubSubMessageHandlerCallback(topic, subid); Hedwig::MessageHandlerCallbackPtr handler(cb); sub.startDelivery(topic, subid, handler); sub.stopDelivery(topic, subid); // stop again sub.stopDelivery(topic, subid); } // test startDelivery / stopDelivery in msg handler TEST(PubSubTest, testStartStopDeliveryInMsgHandler) { std::string topic("startStopDeliveryInMsgHandler"); std::string subscriber("mysubid"); Hedwig::Configuration* conf = new TestServerConfiguration(); std::auto_ptr confptr(conf); Hedwig::Client* client = new Hedwig::Client(*conf); std::auto_ptr clientptr(client); Hedwig::Subscriber& sub = client->getSubscriber(); Hedwig::Publisher& pub = client->getPublisher(); // subscribe topic sub.subscribe(topic, subscriber, Hedwig::SubscribeRequest::CREATE_OR_ATTACH); int numMsgs = 5; for (int i=0; igetNextValue() == numMsgs) { break; } else { sleep(1); } } ASSERT_TRUE(cb->getNextValue() == numMsgs); sub.stopDelivery(topic, subscriber); sub.closeSubscription(topic, subscriber); } // test startDelivery / stopDelivery randomly TEST(PubSubTest, testRandomDelivery) { std::string topic = "randomDeliveryTopic"; std::string subscriber = "mysub-randomDelivery"; int nLoops = 300; int sleepTimePerLoop = 1; int syncTimeout = 10000; Hedwig::Configuration* conf = new TestServerConfiguration(syncTimeout); std::auto_ptr confptr(conf); Hedwig::Client* client = new Hedwig::Client(*conf); std::auto_ptr clientptr(client); Hedwig::Subscriber& sub = client->getSubscriber(); Hedwig::Publisher& pub = client->getPublisher(); // subscribe topic sub.subscribe(topic, subscriber, Hedwig::SubscribeRequest::CREATE_OR_ATTACH); // start thread to publish message IntegerPublisher intPublisher = IntegerPublisher(topic, 0, 0, 0, pub, nLoops * sleepTimePerLoop * 1000); boost::thread pubThread(intPublisher); // start random delivery PubSubOrderCheckingMessageHandlerCallback* cb = new PubSubOrderCheckingMessageHandlerCallback(topic, subscriber, 0, 0); Hedwig::MessageHandlerCallbackPtr handler(cb); for (int i = 0; i < nLoops; i++) { LOG4CXX_DEBUG(logger, "Randomly Delivery : " << i); sub.startDelivery(topic, subscriber, handler); // sleep random time usleep(rand()%1000000); sub.stopDelivery(topic, subscriber); ASSERT_TRUE(cb->inOrder()); } pubThread.join(); } // check message ordering TEST(PubSubTest, testPubSubOrderChecking) { std::string topic = "orderCheckingTopic"; std::string sid = "mysub-0"; int numMessages = 5; int sleepTimeInConsume = 1; // sync timeout int syncTimeout = 10000; // in order to guarantee message order, message queue should be locked // so message received in io thread would be blocked, which also block // sent operations (publish). 
because we have only one io thread now,
  // so increase the sync timeout to 10s, which is more than numMessages * sleepTimeInConsume
  Hedwig::Configuration* conf = new TestServerConfiguration(syncTimeout);
  std::auto_ptr<Hedwig::Configuration> confptr(conf);
  Hedwig::Client* client = new Hedwig::Client(*conf);
  std::auto_ptr<Hedwig::Client> clientptr(client);
  Hedwig::Subscriber& sub = client->getSubscriber();
  Hedwig::Publisher& pub = client->getPublisher();

  sub.subscribe(topic, sid, Hedwig::SubscribeRequest::CREATE_OR_ATTACH);

  // we don't start delivery first, so the messages will be queued:
  // publish ${numMessages} messages and they all stay queued
  for (int i=1; i<=numMessages; i++) {
    std::stringstream ss;
    ss << i;
    pub.publish(topic, ss.str());
  }

  PubSubOrderCheckingMessageHandlerCallback* cb =
    new PubSubOrderCheckingMessageHandlerCallback(topic, sid, 0, sleepTimeInConsume);
  Hedwig::MessageHandlerCallbackPtr handler(cb);

  // create a thread to publish another ${numMessages} messages
  boost::thread pubThread(IntegerPublisher(topic, numMessages, numMessages, sleepTimeInConsume, pub, 0));

  // starting delivery will consume the queued messages;
  // new messages will be received while the queued ones are consumed,
  // and hedwig should ensure the messages are received in order
  sub.startDelivery(topic, sid, handler);

  // wait until the messages are all published
  pubThread.join();

  for (int i = 0; i < 10; i++) {
    sleep(3);
    if (cb->nextExpectedMsgId() == 2 * numMessages) {
      break;
    }
  }
  ASSERT_TRUE(cb->inOrder());
}

// check message ordering
TEST(PubSubTest, testPubSubInMultiDispatchThreads) {
  std::string topic = "PubSubInMultiDispatchThreadsTopic-";
  std::string sid = "mysub-0";
  int syncTimeout = 10000;
  int numDispatchThreads = 4;
  int numMessages = 100;
  int numTopics = 20;

  Hedwig::Configuration* conf = new TestServerConfiguration(syncTimeout, numDispatchThreads);
  std::auto_ptr<Hedwig::Configuration> confptr(conf);
  Hedwig::Client* client = new Hedwig::Client(*conf);
  std::auto_ptr<Hedwig::Client> clientptr(client);
  Hedwig::Subscriber& sub = client->getSubscriber();
  Hedwig::Publisher& pub = client->getPublisher();

  std::vector callbacks;
  for (int i=0; i > threads;
  for (int i=0; i t = boost::shared_ptr(
      new boost::thread(IntegerPublisher(ss.str(), 0, numMessages, 0, pub, 0)));
  threads.push_back(t); }
  for (int i=0; ijoin(); }
  threads.clear();
  for (int j=0; jnextExpectedMsgId() == numMessages) { break; } sleep(3); }
  ASSERT_TRUE(cb->inOrder()); }
  callbacks.clear();
}

TEST(PubSubTest, testPubSubContinuousOverClose) {
  std::string topic = "pubSubTopic";
  std::string sid = "MySubscriberid-1";

  Hedwig::Configuration* conf = new TestServerConfiguration();
  std::auto_ptr<Hedwig::Configuration> confptr(conf);
  Hedwig::Client* client = new Hedwig::Client(*conf);
  std::auto_ptr<Hedwig::Client> clientptr(client);
  Hedwig::Subscriber& sub = client->getSubscriber();
  Hedwig::Publisher& pub = client->getPublisher();

  sub.subscribe(topic, sid, Hedwig::SubscribeRequest::CREATE_OR_ATTACH);
  PubSubMessageHandlerCallback* cb = new PubSubMessageHandlerCallback(topic, sid);
  Hedwig::MessageHandlerCallbackPtr handler(cb);
  sub.startDelivery(topic, sid, handler);
  pub.publish(topic, "Test Message 1");
  bool pass = false;
  for (int i = 0; i < 10; i++) {
    sleep(3);
    if (cb->numMessagesReceived() > 0) {
      if (cb->getLastMessage() == "Test Message 1") {
        pass = true;
        break;
      }
    }
  }
  ASSERT_TRUE(pass);

  sub.closeSubscription(topic, sid);
  pub.publish(topic, "Test Message 2");
  sub.subscribe(topic, sid, Hedwig::SubscribeRequest::CREATE_OR_ATTACH);
  sub.startDelivery(topic, sid, handler);
  pass = false;
  for (int i = 0; i < 10; i++) {
    sleep(3);
    if (cb->numMessagesReceived() > 0) {
      if
(cb->getLastMessage() == "Test Message 2") { pass = true; break; } } } ASSERT_TRUE(pass); } /* void testPubSubContinuousOverServerDown() { std::string topic = "pubSubTopic"; std::string sid = "MySubscriberid-1"; Hedwig::Configuration* conf = new TestServerConfiguration(); std::auto_ptr confptr(conf); Hedwig::Client* client = new Hedwig::Client(*conf); std::auto_ptr clientptr(client); Hedwig::Subscriber& sub = client->getSubscriber(); Hedwig::Publisher& pub = client->getPublisher(); sub.subscribe(topic, sid, Hedwig::SubscribeRequest::CREATE_OR_ATTACH); PubSubMessageHandlerCallback* cb = new PubSubMessageHandlerCallback(topic, sid); Hedwig::MessageHandlerCallbackPtr handler(cb); sub.startDelivery(topic, sid, handler); pub.publish(topic, "Test Message 1"); bool pass = false; for (int i = 0; i < 10; i++) { sleep(3); if (cb->numMessagesReceived() > 0) { if (cb->getLastMessage() == "Test Message 1") { pass = true; break; } } } CPPUNIT_ASSERT(pass); sub.closeSubscription(topic, sid); pub.publish(topic, "Test Message 2"); sub.subscribe(topic, sid, Hedwig::SubscribeRequest::CREATE_OR_ATTACH); sub.startDelivery(topic, sid, handler); pass = false; for (int i = 0; i < 10; i++) { sleep(3); if (cb->numMessagesReceived() > 0) { if (cb->getLastMessage() == "Test Message 2") { pass = true; break; } } } CPPUNIT_ASSERT(pass); }*/ TEST(PubSubTest, testMultiTopic) { std::string topicA = "pubSubTopicA"; std::string topicB = "pubSubTopicB"; std::string sid = "MySubscriberid-3"; Hedwig::Configuration* conf = new TestServerConfiguration(); std::auto_ptr confptr(conf); Hedwig::Client* client = new Hedwig::Client(*conf); std::auto_ptr clientptr(client); Hedwig::Subscriber& sub = client->getSubscriber(); Hedwig::Publisher& pub = client->getPublisher(); sub.subscribe(topicA, sid, Hedwig::SubscribeRequest::CREATE_OR_ATTACH); sub.subscribe(topicB, sid, Hedwig::SubscribeRequest::CREATE_OR_ATTACH); PubSubMessageHandlerCallback* cbA = new PubSubMessageHandlerCallback(topicA, sid); Hedwig::MessageHandlerCallbackPtr handlerA(cbA); sub.startDelivery(topicA, sid, handlerA); PubSubMessageHandlerCallback* cbB = new PubSubMessageHandlerCallback(topicB, sid); Hedwig::MessageHandlerCallbackPtr handlerB(cbB); sub.startDelivery(topicB, sid, handlerB); pub.publish(topicA, "Test Message A"); pub.publish(topicB, "Test Message B"); int passA = false, passB = false; for (int i = 0; i < 10; i++) { sleep(3); if (cbA->numMessagesReceived() > 0) { if (cbA->getLastMessage() == "Test Message A") { passA = true; } } if (cbB->numMessagesReceived() > 0) { if (cbB->getLastMessage() == "Test Message B") { passB = true; } } if (passA && passB) { break; } } ASSERT_TRUE(passA && passB); } TEST(PubSubTest, testMultiTopicMultiSubscriber) { std::string topicA = "pubSubTopicA"; std::string topicB = "pubSubTopicB"; std::string sidA = "MySubscriberid-4"; std::string sidB = "MySubscriberid-5"; Hedwig::Configuration* conf = new TestServerConfiguration(); std::auto_ptr confptr(conf); Hedwig::Client* client = new Hedwig::Client(*conf); std::auto_ptr clientptr(client); Hedwig::Subscriber& sub = client->getSubscriber(); Hedwig::Publisher& pub = client->getPublisher(); sub.subscribe(topicA, sidA, Hedwig::SubscribeRequest::CREATE_OR_ATTACH); sub.subscribe(topicB, sidB, Hedwig::SubscribeRequest::CREATE_OR_ATTACH); PubSubMessageHandlerCallback* cbA = new PubSubMessageHandlerCallback(topicA, sidA); Hedwig::MessageHandlerCallbackPtr handlerA(cbA); sub.startDelivery(topicA, sidA, handlerA); PubSubMessageHandlerCallback* cbB = new PubSubMessageHandlerCallback(topicB, 
sidB); Hedwig::MessageHandlerCallbackPtr handlerB(cbB); sub.startDelivery(topicB, sidB, handlerB); pub.publish(topicA, "Test Message A"); pub.publish(topicB, "Test Message B"); int passA = false, passB = false; for (int i = 0; i < 10; i++) { sleep(3); if (cbA->numMessagesReceived() > 0) { if (cbA->getLastMessage() == "Test Message A") { passA = true; } } if (cbB->numMessagesReceived() > 0) { if (cbB->getLastMessage() == "Test Message B") { passB = true; } } if (passA && passB) { break; } } ASSERT_TRUE(passA && passB); } static const int BIG_MESSAGE_SIZE = 16436*2; // MTU to lo0 is 16436 by default on linux TEST(PubSubTest, testBigMessage) { std::string topic = "pubSubTopic"; std::string sid = "MySubscriberid-6"; Hedwig::Configuration* conf = new TestServerConfiguration(); std::auto_ptr confptr(conf); Hedwig::Client* client = new Hedwig::Client(*conf); std::auto_ptr clientptr(client); Hedwig::Subscriber& sub = client->getSubscriber(); Hedwig::Publisher& pub = client->getPublisher(); sub.subscribe(topic, sid, Hedwig::SubscribeRequest::CREATE_OR_ATTACH); PubSubMessageHandlerCallback* cb = new PubSubMessageHandlerCallback(topic, sid); Hedwig::MessageHandlerCallbackPtr handler(cb); sub.startDelivery(topic, sid, handler); char buf[BIG_MESSAGE_SIZE]; std::string bigmessage(buf, BIG_MESSAGE_SIZE); pub.publish(topic, bigmessage); pub.publish(topic, "Test Message 1"); bool pass = false; for (int i = 0; i < 10; i++) { sleep(3); if (cb->numMessagesReceived() > 0) { if (cb->getLastMessage() == "Test Message 1") { pass = true; break; } } } ASSERT_TRUE(pass); } bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/test/subscribetest.cpp000066400000000000000000000220461244507361200266610ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ #ifdef HAVE_CONFIG_H #include #endif #include "gtest/gtest.h" #include "../lib/clientimpl.h" #include #include #include #include #include #include "util.h" static log4cxx::LoggerPtr logger(log4cxx::Logger::getLogger("hedwig."__FILE__)); TEST(SubscribeTest, testSyncSubscribe) { Hedwig::Configuration* conf = new TestServerConfiguration(); std::auto_ptr confptr(conf); Hedwig::Client* client = new Hedwig::Client(*conf); std::auto_ptr clientptr(client); Hedwig::Subscriber& sub = client->getSubscriber(); sub.subscribe("testTopic", "mySubscriberId-1", Hedwig::SubscribeRequest::CREATE_OR_ATTACH); } TEST(SubscribeTest, testSyncSubscribeAttach) { Hedwig::Configuration* conf = new TestServerConfiguration(); std::auto_ptr confptr(conf); Hedwig::Client* client = new Hedwig::Client(*conf); std::auto_ptr clientptr(client); Hedwig::Subscriber& sub = client->getSubscriber(); ASSERT_THROW(sub.subscribe("iAmATopicWhoDoesNotExist", "mySubscriberId-2", Hedwig::SubscribeRequest::ATTACH), Hedwig::ClientException); } TEST(SubscribeTest, testAsyncSubscribe) { SimpleWaitCondition* cond1 = new SimpleWaitCondition(); std::auto_ptr cond1ptr(cond1); Hedwig::Configuration* conf = new TestServerConfiguration(); std::auto_ptr confptr(conf); Hedwig::Client* client = new Hedwig::Client(*conf); std::auto_ptr clientptr(client); Hedwig::Subscriber& sub = client->getSubscriber(); Hedwig::OperationCallbackPtr testcb1(new TestCallback(cond1)); sub.asyncSubscribe("testTopic", "mySubscriberId-3", Hedwig::SubscribeRequest::CREATE_OR_ATTACH, testcb1); cond1->wait(); ASSERT_TRUE(cond1->wasSuccess()); } TEST(SubscribeTest, testAsyncSubcribeAndUnsubscribe) { SimpleWaitCondition* cond1 = new SimpleWaitCondition(); std::auto_ptr cond1ptr(cond1); SimpleWaitCondition* cond2 = new SimpleWaitCondition(); std::auto_ptr cond2ptr(cond2); Hedwig::Configuration* conf = new TestServerConfiguration(); std::auto_ptr confptr(conf); Hedwig::Client* client = new Hedwig::Client(*conf); std::auto_ptr clientptr(client); Hedwig::Subscriber& sub = client->getSubscriber(); Hedwig::OperationCallbackPtr testcb1(new TestCallback(cond1)); Hedwig::OperationCallbackPtr testcb2(new TestCallback(cond2)); sub.asyncSubscribe("testTopic", "mySubscriberId-4", Hedwig::SubscribeRequest::CREATE_OR_ATTACH, testcb1); cond1->wait(); ASSERT_TRUE(cond1->wasSuccess()); sub.asyncUnsubscribe("testTopic", "mySubscriberId-4", testcb2); cond2->wait(); ASSERT_TRUE(cond2->wasSuccess()); } TEST(SubscribeTest, testAsyncSubcribeAndSyncUnsubscribe) { SimpleWaitCondition* cond1 = new SimpleWaitCondition(); std::auto_ptr cond1ptr(cond1); Hedwig::Configuration* conf = new TestServerConfiguration(); std::auto_ptr confptr(conf); Hedwig::Client* client = new Hedwig::Client(*conf); std::auto_ptr clientptr(client); Hedwig::Subscriber& sub = client->getSubscriber(); Hedwig::OperationCallbackPtr testcb1(new TestCallback(cond1)); sub.asyncSubscribe("testTopic", "mySubscriberId-5", Hedwig::SubscribeRequest::CREATE_OR_ATTACH, testcb1); cond1->wait(); ASSERT_TRUE(cond1->wasSuccess()); sub.unsubscribe("testTopic", "mySubscriberId-5"); } TEST(SubscribeTest, testAsyncSubcribeCloseSubscriptionAndThenResubscribe) { Hedwig::Configuration* conf = new TestServerConfiguration(); std::auto_ptr confptr(conf); Hedwig::Client* client = new Hedwig::Client(*conf); std::auto_ptr clientptr(client); Hedwig::Subscriber& sub = client->getSubscriber(); sub.subscribe("testTopic", "mySubscriberId-6", Hedwig::SubscribeRequest::CREATE_OR_ATTACH); sub.closeSubscription("testTopic", "mySubscriberId-6"); 
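  // closeSubscription() only tears down the client-side delivery state; the
  // subscription itself survives on the server, which is why the second
  // CREATE_OR_ATTACH below re-attaches cleanly and the final unsubscribe() is
  // what actually removes it.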
sub.subscribe("testTopic", "mySubscriberId-6", Hedwig::SubscribeRequest::CREATE_OR_ATTACH); sub.unsubscribe("testTopic", "mySubscriberId-6"); } TEST(SubscribeTest, testUnsubscribeWithoutSubscribe) { Hedwig::Configuration* conf = new TestServerConfiguration(); std::auto_ptr confptr(conf); Hedwig::Client* client = new Hedwig::Client(*conf); std::auto_ptr clientptr(client); Hedwig::Subscriber& sub = client->getSubscriber(); ASSERT_THROW(sub.unsubscribe("testTopic", "mySubscriberId-7"), Hedwig::NotSubscribedException); } TEST(SubscribeTest, testAsyncSubscribeTwice) { Hedwig::Configuration* conf = new TestServerConfiguration(); std::auto_ptr confptr(conf); Hedwig::Client* client = new Hedwig::Client(*conf); std::auto_ptr clientptr(client); Hedwig::Subscriber& sub = client->getSubscriber(); SimpleWaitCondition* cond1 = new SimpleWaitCondition(); std::auto_ptr cond1ptr(cond1); SimpleWaitCondition* cond2 = new SimpleWaitCondition(); std::auto_ptr cond2ptr(cond2); Hedwig::OperationCallbackPtr testcb1(new TestCallback(cond1)); Hedwig::OperationCallbackPtr testcb2(new TestCallback(cond2)); std::string topic("testAsyncSubscribeTwice"); std::string subid("mysubid"); sub.asyncSubscribe(topic, subid, Hedwig::SubscribeRequest::CREATE_OR_ATTACH, testcb1); sub.asyncSubscribe(topic, subid, Hedwig::SubscribeRequest::CREATE_OR_ATTACH, testcb2); cond1->wait(); cond2->wait(); if (cond1->wasSuccess()) { ASSERT_TRUE(!cond2->wasSuccess()); } else { ASSERT_TRUE(cond2->wasSuccess()); } } TEST(SubscribeTest, testSubscribeTwice) { Hedwig::Configuration* conf = new TestServerConfiguration(); std::auto_ptr confptr(conf); Hedwig::Client* client = new Hedwig::Client(*conf); std::auto_ptr clientptr(client); Hedwig::Subscriber& sub = client->getSubscriber(); sub.subscribe("testTopic", "mySubscriberId-8", Hedwig::SubscribeRequest::CREATE_OR_ATTACH); ASSERT_THROW(sub.subscribe("testTopic", "mySubscriberId-8", Hedwig::SubscribeRequest::CREATE_OR_ATTACH), Hedwig::AlreadySubscribedException); } TEST(SubscribeTest, testAsyncSubcribeForceAttach) { Hedwig::Configuration* conf = new TestServerConfiguration(); std::auto_ptr confptr(conf); // client 1 Hedwig::Client* client1 = new Hedwig::Client(*conf); std::auto_ptr client1ptr(client1); Hedwig::Subscriber& sub1 = client1->getSubscriber(); // client 2 Hedwig::Client* client2 = new Hedwig::Client(*conf); std::auto_ptr client2ptr(client2); Hedwig::Subscriber& sub2 = client2->getSubscriber(); SimpleWaitCondition* cond1 = new SimpleWaitCondition(); std::auto_ptr cond1ptr(cond1); Hedwig::OperationCallbackPtr testcb1(new TestCallback(cond1)); SimpleWaitCondition* lcond1 = new SimpleWaitCondition(); std::auto_ptr lcond1ptr(lcond1); Hedwig::SubscriptionListenerPtr listener1( new TestSubscriptionListener(lcond1, Hedwig::SUBSCRIPTION_FORCED_CLOSED)); Hedwig::SubscriptionOptions options; options.set_createorattach(Hedwig::SubscribeRequest::CREATE_OR_ATTACH); options.set_forceattach(true); options.set_enableresubscribe(false); sub1.addSubscriptionListener(listener1); sub1.asyncSubscribe("asyncSubscribeForceAttach", "mysub", options, testcb1); cond1->wait(); ASSERT_TRUE(cond1->wasSuccess()); // sub2 subscribe would force close the channel of sub1 SimpleWaitCondition* cond2 = new SimpleWaitCondition(); std::auto_ptr cond2ptr(cond2); Hedwig::OperationCallbackPtr testcb2(new TestCallback(cond2)); Hedwig::SubscriptionListenerPtr listener2( new TestSubscriptionListener(0, Hedwig::SUBSCRIPTION_FORCED_CLOSED)); sub2.addSubscriptionListener(listener2); sub2.asyncSubscribe("asyncSubscribeForceAttach", 
"mysub", options, testcb2); cond2->wait(); ASSERT_TRUE(cond2->wasSuccess()); // sub1 would receive the disconnect event lcond1->wait(); sub1.unsubscribe("asyncSubscribeForceAttach", "mysub"); } bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/test/test.sh000066400000000000000000000034141244507361200246050ustar00rootroot00000000000000#!/bin/sh # Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements. See the NOTICE file distributed with # this work for additional information regarding copyright ownership. # The ASF licenses this file to You under the Apache License, Version 2.0 # (the "License"); you may not use this file except in compliance with # the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. export LD_LIBRARY_PATH=/usr/lib/jvm/java-6-sun/jre/lib/i386/server/:/usr/lib/jvm/java-6-sun/jre/lib/i386/ export CLASSPATH=$HOME/src/hedwig/server/target/test-classes:$HOME/src/hedwig/server/lib/bookkeeper-SNAPSHOT.jar:$HOME/src/hedwig/server/lib/zookeeper-SNAPSHOT.jar:$HOME/src/hedwig/server/target/classes:$HOME/src/hedwig/protocol/target/classes:$HOME/src/hedwig/client/target/classes:$HOME/.m2/repository/commons-configuration/commons-configuration/1.6/commons-configuration-1.6.jar:$HOME/.m2/repository/org/jboss/netty/netty/3.1.2.GA/netty-3.1.2.GA.jar:$HOME/.m2/repository/commons-lang/commons-lang/2.4/commons-lang-2.4.jar:$HOME/.m2/repository/commons-collections/commons-collections/3.2.1/commons-collections-3.2.1.jar:$HOME/.m2/repository/commons-logging/commons-logging/1.1.1/commons-logging-1.1.1.jar:$HOME/.m2/repository/com/google/protobuf/protobuf-java/2.3.0/protobuf-java-2.3.0.jar:$HOME/.m2/repository/log4j/log4j/1.2.14/log4j-1.2.14.jar:$HOME/src/hedwig/client/target/classes/ ./hedwigtestbookkeeper-release-4.2.4/hedwig-client/src/main/cpp/test/throttledeliverytest.cpp000066400000000000000000000124411244507361200303070ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
 */
#ifdef HAVE_CONFIG_H
#include <config.h>
#endif

#include "gtest/gtest.h"
#include "../lib/clientimpl.h"
#include
#include
#include
#include
#include
#include "util.h"

static log4cxx::LoggerPtr logger(log4cxx::Logger::getLogger("hedwig."__FILE__));

class ThrottleDeliveryConfiguration : public TestServerConfiguration {
public:
  ThrottleDeliveryConfiguration() : TestServerConfiguration() {}

  virtual bool getBool(const std::string& key, bool defaultVal) const {
    if (key == Configuration::SUBSCRIBER_AUTOCONSUME) {
      return false;
    } else {
      return TestServerConfiguration::getBool(key, defaultVal);
    }
  }
};

class ThrottleDeliveryMessageHandlerCallback : public Hedwig::MessageHandlerCallback {
public:
  ThrottleDeliveryMessageHandlerCallback(Hedwig::Subscriber& sub,
                                         const int start, const int end,
                                         const int expectedToThrottle,
                                         SimpleWaitCondition& throttleLatch,
                                         SimpleWaitCondition& nonThrottleLatch)
    : sub(sub), next(start), end(end), expectedToThrottle(expectedToThrottle),
      throttleLatch(throttleLatch), nonThrottleLatch(nonThrottleLatch) {
  }

  virtual void consume(const std::string& topic, const std::string& subscriberId,
                       const Hedwig::Message& msg, Hedwig::OperationCallbackPtr& callback) {
    const int value = atoi(msg.body().c_str());
    LOG4CXX_DEBUG(logger, "received message " << value);
    boost::lock_guard<boost::mutex> lock(mutex);
    if (value == next) {
      ++next;
    } else {
      LOG4CXX_ERROR(logger, "Did not receive expected value " << next << ", got " << value);
      next = 0;
      throttleLatch.setSuccess(false);
      throttleLatch.notify();
      nonThrottleLatch.setSuccess(false);
      nonThrottleLatch.notify();
    }
    if (next == expectedToThrottle + 2) {
      throttleLatch.setSuccess(true);
      throttleLatch.notify();
    } else if (next == end + 1) {
      nonThrottleLatch.setSuccess(true);
      nonThrottleLatch.notify();
    }
    callback->operationComplete();
    if (next > expectedToThrottle + 1) {
      sub.consume(topic, subscriberId, msg.msgid());
    }
  }

  int nextExpected() {
    boost::lock_guard<boost::mutex> lock(mutex);
    return next;
  }

protected:
  Hedwig::Subscriber& sub;
  boost::mutex mutex;
  int next;
  const int end;
  const int expectedToThrottle;
  SimpleWaitCondition& throttleLatch;
  SimpleWaitCondition& nonThrottleLatch;
};

void throttleX(Hedwig::Publisher& pub, Hedwig::Subscriber& sub,
               const std::string& topic, const std::string& subid, int X) {
  for (int i = 1; i <= 3*X; i++) {
    std::stringstream oss;
    oss << i;
    pub.publish(topic, oss.str());
  }

  sub.subscribe(topic, subid, Hedwig::SubscribeRequest::ATTACH);
  SimpleWaitCondition throttleLatch, nonThrottleLatch;
  ThrottleDeliveryMessageHandlerCallback* cb =
    new ThrottleDeliveryMessageHandlerCallback(sub, 1, 3*X, X, throttleLatch, nonThrottleLatch);
  Hedwig::MessageHandlerCallbackPtr handler(cb);
  sub.startDelivery(topic, subid, handler);

  throttleLatch.timed_wait(3000);
  ASSERT_TRUE(!throttleLatch.wasSuccess());
  ASSERT_EQ(X + 1, cb->nextExpected());

  // consume messages to not throttle it
  for (int i=1; i<=X; i++) {
    Hedwig::MessageSeqId msgid;
    msgid.set_localcomponent(i);
    sub.consume(topic, subid, msgid);
  }

  nonThrottleLatch.timed_wait(10000);
  ASSERT_TRUE(nonThrottleLatch.wasSuccess());
  ASSERT_EQ(3*X + 1, cb->nextExpected());

  sub.stopDelivery(topic, subid);
  sub.closeSubscription(topic, subid);
}

TEST(ThrottleDeliveryTest, testThrottleDelivery) {
  Hedwig::Configuration* conf = new ThrottleDeliveryConfiguration();
  std::auto_ptr<Hedwig::Configuration> confptr(conf);
  Hedwig::Client* client = new Hedwig::Client(*conf);
  std::auto_ptr<Hedwig::Client> clientptr(client);
  Hedwig::Subscriber& sub = client->getSubscriber();
  Hedwig::Publisher& pub = client->getPublisher();

  int throttleValue = 10;
  std::string topic =
"testThrottleDelivery"; std::string subid = "testSubId"; Hedwig::SubscriptionOptions options; options.set_createorattach(Hedwig::SubscribeRequest::CREATE_OR_ATTACH); options.set_messagewindowsize(throttleValue); sub.subscribe(topic, subid, options); sub.closeSubscription(topic, subid); throttleX(pub, sub, topic, subid, throttleValue); } bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/test/util.h000066400000000000000000000131231244507361200244160ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include "../lib/clientimpl.h" #include #include #include #include #include #include #include static log4cxx::LoggerPtr utillogger(log4cxx::Logger::getLogger("hedwig."__FILE__)); class SimpleWaitCondition { public: SimpleWaitCondition() : flag(false), success(false) {}; ~SimpleWaitCondition() {} void wait() { boost::unique_lock lock(mut); while(!flag) { cond.wait(lock); } } void timed_wait(uint64_t milliseconds) { boost::mutex::scoped_lock lock(mut); if (!flag) { LOG4CXX_DEBUG(utillogger, "wait for " << milliseconds << " ms."); if (!cond.timed_wait(lock, boost::posix_time::milliseconds(milliseconds))) { LOG4CXX_DEBUG(utillogger, "Timeout wait for " << milliseconds << " ms."); } } } void notify() { { boost::lock_guard lock(mut); flag = true; } cond.notify_all(); } void setSuccess(bool s) { success = s; } bool wasSuccess() { return success; } private: bool flag; boost::condition_variable cond; boost::mutex mut; bool success; }; class TestPublishResponseCallback : public Hedwig::PublishResponseCallback { public: TestPublishResponseCallback(SimpleWaitCondition* cond) : cond(cond) { } virtual void operationComplete(const Hedwig::PublishResponsePtr & resp) { LOG4CXX_DEBUG(utillogger, "operationComplete"); pubResp = resp; cond->setSuccess(true); cond->notify(); } virtual void operationFailed(const std::exception& exception) { LOG4CXX_DEBUG(utillogger, "operationFailed: " << exception.what()); cond->setSuccess(false); cond->notify(); } Hedwig::PublishResponsePtr getResponse() { return pubResp; } private: SimpleWaitCondition *cond; Hedwig::PublishResponsePtr pubResp; }; class TestCallback : public Hedwig::OperationCallback { public: TestCallback(SimpleWaitCondition* cond) : cond(cond) { } virtual void operationComplete() { LOG4CXX_DEBUG(utillogger, "operationComplete"); cond->setSuccess(true); cond->notify(); } virtual void operationFailed(const std::exception& exception) { LOG4CXX_DEBUG(utillogger, "operationFailed: " << exception.what()); cond->setSuccess(false); cond->notify(); } private: SimpleWaitCondition *cond; }; class TestSubscriptionListener : public Hedwig::SubscriptionListener { public: TestSubscriptionListener(SimpleWaitCondition* cond, const Hedwig::SubscriptionEvent event) : cond(cond), expectedEvent(event) { LOG4CXX_DEBUG(utillogger, 
"Created TestSubscriptionListener " << this); } virtual ~TestSubscriptionListener() {} virtual void processEvent(const std::string& topic, const std::string& subscriberId, const Hedwig::SubscriptionEvent event) { LOG4CXX_DEBUG(utillogger, "Received event " << event << " for (topic:" << topic << ", subscriber:" << subscriberId << ") from listener " << this); if (expectedEvent == event) { if (cond) { cond->setSuccess(true); cond->notify(); } } } private: SimpleWaitCondition *cond; const Hedwig::SubscriptionEvent expectedEvent; }; class TestServerConfiguration : public Hedwig::Configuration { public: TestServerConfiguration() : address("localhost:4081:9877"), syncTimeout(10000), numThreads(2) {} TestServerConfiguration(std::string& defaultServer) : address(defaultServer), syncTimeout(10000), numThreads(2) {} TestServerConfiguration(int syncTimeout, int numThreads = 2) : address("localhost:4081:9877"), syncTimeout(syncTimeout), numThreads(numThreads) {} virtual int getInt(const std::string& key, int defaultVal) const { if (key == Configuration::SYNC_REQUEST_TIMEOUT) { return syncTimeout; } else if (key == Configuration::NUM_DISPATCH_THREADS) { return numThreads; } return defaultVal; } virtual const std::string get(const std::string& key, const std::string& defaultVal) const { if (key == Configuration::DEFAULT_SERVER) { return address; } else if (key == Configuration::SSL_PEM_FILE) { return certFile; } else { return defaultVal; } } virtual bool getBool(const std::string& key, bool defaultVal) const { if (key == Configuration::SSL_ENABLED) { return isSSL; } else if (key == Configuration::SUBSCRIPTION_CHANNEL_SHARING_ENABLED) { return multiplexing; } return defaultVal; } public: // for testing static bool isSSL; static std::string certFile; static bool multiplexing; private: const std::string address; const int syncTimeout; const int numThreads; }; bookkeeper-release-4.2.4/hedwig-client/src/main/cpp/test/utiltest.cpp000066400000000000000000000051641244507361200256570ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
 */
#ifdef HAVE_CONFIG_H
#include <config.h>
#endif

#include "gtest/gtest.h"

#include "../lib/util.h"
#include <hedwig/exceptions.h>
#include <stdexcept>

TEST(UtilTest, testHostAddress) {
  // good address (no ports)
  Hedwig::HostAddress a1 = Hedwig::HostAddress::fromString("www.yahoo.com");
  ASSERT_TRUE(a1.port() == 4080);

  // good address with ip (no ports)
  Hedwig::HostAddress a2 = Hedwig::HostAddress::fromString("127.0.0.1");
  ASSERT_TRUE(a2.port() == 4080);
  ASSERT_TRUE(a2.ip() == ((127 << 24) | 1));

  // good address
  Hedwig::HostAddress a3 = Hedwig::HostAddress::fromString("www.yahoo.com:80");
  ASSERT_TRUE(a3.port() == 80);

  // good address with ip
  Hedwig::HostAddress a4 = Hedwig::HostAddress::fromString("127.0.0.1:80");
  ASSERT_TRUE(a4.port() == 80);
  ASSERT_TRUE(a4.ip() == ((127 << 24) | 1));

  // good address (with ssl)
  Hedwig::HostAddress a5 = Hedwig::HostAddress::fromString("www.yahoo.com:80:443");
  ASSERT_TRUE(a5.port() == 80);

  // good address with ip
  Hedwig::HostAddress a6 = Hedwig::HostAddress::fromString("127.0.0.1:80:443");
  ASSERT_TRUE(a6.port() == 80);
  ASSERT_TRUE(a6.ip() == ((127 << 24) | 1));

  // nothing
  ASSERT_THROW(Hedwig::HostAddress::fromString(""), Hedwig::HostResolutionException);

  // nothing but colons
  ASSERT_THROW(Hedwig::HostAddress::fromString("::::::::::::::::"), Hedwig::ConfigurationException);

  // only port number
  ASSERT_THROW(Hedwig::HostAddress::fromString(":80"), Hedwig::HostResolutionException);

  // text after colon (isn't supported)
  ASSERT_THROW(Hedwig::HostAddress::fromString("www.yahoo.com:http"), Hedwig::ConfigurationException);

  // invalid hostname
  ASSERT_THROW(Hedwig::HostAddress::fromString("com.oohay.www:80"), Hedwig::HostResolutionException);

  // null
  ASSERT_THROW(Hedwig::HostAddress::fromString(NULL), std::logic_error);
}
bookkeeper-release-4.2.4/hedwig-client/src/main/java/000077500000000000000000000000001244507361200224505ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/000077500000000000000000000000001244507361200232375ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/000077500000000000000000000000001244507361200244605ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/000077500000000000000000000000001244507361200257275ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/000077500000000000000000000000001244507361200272055ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/HedwigClient.java000066400000000000000000000044601244507361200324220ustar00rootroot00000000000000/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.hedwig.client;

import org.apache.hedwig.client.api.Client;
import org.apache.hedwig.client.api.Publisher;
import org.apache.hedwig.client.api.Subscriber;
import org.apache.hedwig.client.netty.HedwigClientImpl;
import org.apache.hedwig.client.conf.ClientConfiguration;
import org.jboss.netty.channel.ChannelFactory;

/**
 * The Hedwig client, used as the starting point for all communications with
 * the Hedwig service.
 *
 * @see Publisher
 * @see Subscriber
 */
public class HedwigClient implements Client {

    private final Client impl;

    /**
     * Construct a hedwig client object. The configuration object
     * should be an instance of a class which implements ClientConfiguration.
     *
     * @param cfg The client configuration.
     */
    public HedwigClient(ClientConfiguration cfg) {
        impl = HedwigClientImpl.create(cfg);
    }

    /**
     * Construct a hedwig client object, using a preexisting socket factory.
     * This is useful if you need to create many hedwig client instances.
     *
     * @param cfg The client configuration
     * @param socketFactory A netty socket factory.
     */
    public HedwigClient(ClientConfiguration cfg, ChannelFactory socketFactory) {
        impl = HedwigClientImpl.create(cfg, socketFactory);
    }

    @Override
    public Publisher getPublisher() {
        return impl.getPublisher();
    }

    @Override
    public Subscriber getSubscriber() {
        return impl.getSubscriber();
    }

    @Override
    public void close() {
        impl.close();
    }
}
bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/api/000077500000000000000000000000001244507361200277565ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/api/Client.java000066400000000000000000000026371244507361200320470ustar00rootroot00000000000000/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.hedwig.client.api;

/**
 * Interface defining the client API for Hedwig
 */
public interface Client {

    /**
     * Retrieve the Publisher object for the client.
     * This object can be used to publish messages to a topic on Hedwig.
     * @see Publisher
     */
    public Publisher getPublisher();

    /**
     * Retrieve the Subscriber object for the client.
     * This object can be used to subscribe for messages from a topic.
     * @see Subscriber
     */
    public Subscriber getSubscriber();

    /**
     * Close the client and free all associated resources.
     */
    public void close();
}
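/*
 * A minimal end-to-end sketch of the Client API above (a sketch only: the topic
 * name and message body are illustrative, exception handling is elided, and a
 * real ClientConfiguration would usually override the default server address):
 *
 *   HedwigClient client = new HedwigClient(new ClientConfiguration());
 *   Publisher pub = client.getPublisher();
 *   Subscriber sub = client.getSubscriber();
 *
 *   pub.publish(ByteString.copyFromUtf8("myTopic"),
 *               Message.newBuilder().setBody(ByteString.copyFromUtf8("hello")).build());
 *
 *   client.close();
 */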
MessageHandler.java000066400000000000000000000033671244507361200334310ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/api/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.hedwig.client.api;

import com.google.protobuf.ByteString;
import org.apache.hedwig.protocol.PubSubProtocol.Message;
import org.apache.hedwig.util.Callback;

/**
 * Interface defining the client-side handler logic for delivering messages
 * the client is subscribed to.
 *
 */
public interface MessageHandler {

    /**
     * Delivers a message which has been published for topic.
     *
     * @param topic
     *            The topic name where the message came from.
     * @param subscriberId
     *            ID of the subscriber.
     * @param msg
     *            The message object to deliver.
     * @param callback
     *            Callback to invoke when the message delivery has been done.
     * @param context
     *            Calling context that the Callback needs since this is done
     *            asynchronously.
     */
    public void deliver(ByteString topic, ByteString subscriberId, Message msg,
                        Callback<Void> callback, Object context);
}
bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/api/Publisher.java000066400000000000000000000070571244507361200325660ustar00rootroot00000000000000/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.hedwig.client.api;

import com.google.protobuf.ByteString;
import org.apache.hedwig.exceptions.PubSubException.CouldNotConnectException;
import org.apache.hedwig.exceptions.PubSubException.ServiceDownException;
import org.apache.hedwig.protocol.PubSubProtocol;
import org.apache.hedwig.protocol.PubSubProtocol.Message;
import org.apache.hedwig.util.Callback;

/**
 * Interface to define the client Publisher API.
 *
 */
public interface Publisher {

    /**
     * Publishes a message on the given topic.
     *
     * @param topic
     *            Topic name to publish on
     * @param msg
     *            Message object to serialize and publish
     * @throws CouldNotConnectException
     *             If we are not able to connect to the server host
     * @throws ServiceDownException
     *             If we are unable to publish the message to the topic.
     * @return The PubSubProtocol.PublishResponse of the publish, which can be
     *         used to pick up the sequence id of the published message.
     */
    public PubSubProtocol.PublishResponse publish(ByteString topic, Message msg)
        throws CouldNotConnectException, ServiceDownException;

    /**
     * Publishes a message asynchronously on the given topic.
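     *
     * <p>A minimal usage sketch (the callback and context names are illustrative,
     * not part of this API; exception handling is elided):
     * <pre>
     * {@code
     * Message msg = Message.newBuilder()
     *     .setBody(ByteString.copyFromUtf8("hello world")).build();
     * publisher.asyncPublish(ByteString.copyFromUtf8("myTopic"), msg, myCallback, myContext);
     * }
     * </pre>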
     *
     * @param topic
     *            Topic name to publish on
     * @param msg
     *            Message object to serialize and publish
     * @param callback
     *            Callback to invoke when the publish to the server has actually
     *            gone through. This will have to deal with error conditions on
     *            the async publish request.
     * @param context
     *            Calling context that the Callback needs since this is done
     *            asynchronously.
     */
    public void asyncPublish(ByteString topic, Message msg, Callback<Void> callback, Object context);

    /**
     * Publishes a message asynchronously on the given topic.
     * This method, unlike {@link #asyncPublish(ByteString, PubSubProtocol.Message, Callback, Object)},
     * allows the callback to retrieve the
     * {@link org.apache.hedwig.protocol.PubSubProtocol.PublishResponse}
     * which was returned by the server.
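     *
     * <p>A sketch of how a callback might pick the sequence id off the response
     * (the {@code getPublishedMsgId} accessor is assumed from the protobuf
     * definition of PublishResponse):
     * <pre>
     * {@code
     * publisher.asyncPublishWithResponse(topic, msg,
     *     new Callback<PubSubProtocol.PublishResponse>() {
     *         public void operationFinished(Object ctx, PubSubProtocol.PublishResponse resp) {
     *             long seqId = resp.getPublishedMsgId().getLocalComponent();
     *         }
     *         public void operationFailed(Object ctx, PubSubException exception) {
     *             // handle the publish failure
     *         }
     *     }, null);
     * }
     * </pre>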
     *
     * @param topic
     *            Topic name to publish on
     * @param msg
     *            Message object to serialize and publish
     * @param callback
     *            Callback to invoke when the publish to the server has actually
     *            gone through. This will have to deal with error conditions on
     *            the async publish request.
     * @param context
     *            Calling context that the Callback needs since this is done
     *            asynchronously.
     */
    public void asyncPublishWithResponse(ByteString topic, Message msg,
                                         Callback<PubSubProtocol.PublishResponse> callback,
                                         Object context);
}
bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/api/Subscriber.java000066400000000000000000000421041244507361200327250ustar00rootroot00000000000000/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.hedwig.client.api;

import java.util.List;

import com.google.protobuf.ByteString;
import org.apache.hedwig.client.exceptions.AlreadyStartDeliveryException;
import org.apache.hedwig.client.exceptions.InvalidSubscriberIdException;
import org.apache.hedwig.exceptions.PubSubException.ClientAlreadySubscribedException;
import org.apache.hedwig.exceptions.PubSubException.ClientNotSubscribedException;
import org.apache.hedwig.exceptions.PubSubException.CouldNotConnectException;
import org.apache.hedwig.exceptions.PubSubException.ServiceDownException;
import org.apache.hedwig.filter.ClientMessageFilter;
import org.apache.hedwig.protocol.PubSubProtocol.MessageSeqId;
import org.apache.hedwig.protocol.PubSubProtocol.SubscribeRequest.CreateOrAttach;
import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionOptions;
import org.apache.hedwig.util.Callback;
import org.apache.hedwig.util.SubscriptionListener;

/**
 * Interface to define the client Subscriber API.
 *
 */
public interface Subscriber {

    /**
     * Subscribe to the given topic for the inputted subscriberId.
     *
     * @param topic
     *            Topic name of the subscription
     * @param subscriberId
     *            ID of the subscriber
     * @param mode
     *            Whether to prohibit, tolerate, or require an existing
     *            subscription.
     * @throws CouldNotConnectException
     *             If we are not able to connect to the server host
     * @throws ClientAlreadySubscribedException
     *             If client is already subscribed to the topic
     * @throws ServiceDownException
     *             If unable to subscribe to topic
     * @throws InvalidSubscriberIdException
     *             If the subscriberId is not valid. We may want to set aside
     *             certain formats of subscriberId's for different purposes.
     *             e.g. local vs. hub subscriber
     * @deprecated As of BookKeeper 4.2.0, replaced by
     *             {@link Subscriber#subscribe(com.google.protobuf.ByteString,
     *             com.google.protobuf.ByteString,
     *             PubSubProtocol.SubscriptionOptions)}
     */
    @Deprecated
    public void subscribe(ByteString topic, ByteString subscriberId, CreateOrAttach mode)
        throws CouldNotConnectException, ClientAlreadySubscribedException,
               ServiceDownException, InvalidSubscriberIdException;

    /**
     * Subscribe to the given topic asynchronously for the inputted subscriberId
     * disregarding if the topic has been created yet or not.
     *
     * @param topic
     *            Topic name of the subscription
     * @param subscriberId
     *            ID of the subscriber
     * @param mode
     *            Whether to prohibit, tolerate, or require an existing
     *            subscription.
     * @param callback
     *            Callback to invoke when the subscribe request to the server
     *            has actually gone through. This will have to deal with error
     *            conditions on the async subscribe request.
     * @param context
     *            Calling context that the Callback needs since this is done
     *            asynchronously.
     * @deprecated As of BookKeeper 4.2.0, replaced by
     *             {@link Subscriber#asyncSubscribe(com.google.protobuf.ByteString,
     *             com.google.protobuf.ByteString,
     *             PubSubProtocol.SubscriptionOptions,Callback,Object)}
     */
    @Deprecated
    public void asyncSubscribe(ByteString topic, ByteString subscriberId, CreateOrAttach mode,
                               Callback<Void> callback, Object context);

    /**
     * Subscribe to the given topic for the inputted subscriberId.
     *
     * @param topic
     *            Topic name of the subscription
     * @param subscriberId
     *            ID of the subscriber
     * @param options
     *            Options to pass to the subscription. See
     *            {@link Subscriber#asyncSubscribe(com.google.protobuf.ByteString,
     *            com.google.protobuf.ByteString,
     *            PubSubProtocol.SubscriptionOptions,
     *            Callback,Object) asyncSubscribe}
     *            for details on how to set options.
     * @throws CouldNotConnectException
     *             If we are not able to connect to the server host
     * @throws ClientAlreadySubscribedException
     *             If client is already subscribed to the topic
     * @throws ServiceDownException
     *             If unable to subscribe to topic
     * @throws InvalidSubscriberIdException
     *             If the subscriberId is not valid. We may want to set aside
     *             certain formats of subscriberId's for different purposes.
     *             e.g. local vs. hub subscriber
     */
    public void subscribe(ByteString topic, ByteString subscriberId, SubscriptionOptions options)
        throws CouldNotConnectException, ClientAlreadySubscribedException,
               ServiceDownException, InvalidSubscriberIdException;
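    /*
     * A sketch of the synchronous variant (names are illustrative; the checked
     * exceptions declared above must be handled by the caller):
     *
     *   SubscriptionOptions options = SubscriptionOptions.newBuilder()
     *       .setCreateOrAttach(CreateOrAttach.CREATE_OR_ATTACH).build();
     *   subscriber.subscribe(ByteString.copyFromUtf8("myTopic"),
     *                        ByteString.copyFromUtf8("mySubscription"), options);
     */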

    /**
     * Subscribe to the given topic asynchronously for the inputted subscriberId.
     *
     * <p>SubscriptionOptions contains parameters for how the hub should make the
     * subscription. The options include the createorattach mode, the message
     * bound and the message filter.
     *
     * <p>The createorattach mode defines whether the subscription should create
     * a new subscription, or just attach to a preexisting subscription. If it
     * tries to create the subscription, and the subscription already exists,
     * then an error will occur.
     *
     * <p>The message bound defines the maximum number of undelivered messages
     * which will be stored for the subscription. This can be used to ensure that
     * unused subscriptions do not grow in an unbounded fashion. By default, the
     * message bound is infinite, i.e. all undelivered messages will be stored
     * for the subscription. Note that if one subscription on a topic has an
     * infinite message bound, the message bound for all other subscriptions on
     * that topic will effectively be infinite, as the messages have to be stored
     * for the first subscription in any case.
     *
     * <p>The message filter defines a {@link org.apache.hedwig.filter.ServerMessageFilter}
     * run in the hub server to filter messages delivered to the subscription.
     * The server message filter should be placed in the classpath of the hub
     * server before using it.
     *
     * <p>All these subscription options are stored as SubscriptionPreferences in
     * the metadata manager. The next time the subscriber attaches with different
     * options, the new options overwrite the old options.
     *
     * Usage is as follows:
     * <pre>
     * {@code
     * // create a new subscription with a message bound of 5
     * SubscriptionOptions options = SubscriptionOptions.newBuilder()
     *     .setCreateOrAttach(CreateOrAttach.CREATE).setMessageBound(5).build();
     * client.getSubscriber().asyncSubscribe(ByteString.copyFromUtf8("myTopic"),
     *                                       ByteString.copyFromUtf8("mySubscription"),
     *                                       options,
     *                                       myCallback,
     *                                       myContext);
     * }
     * </pre>
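     *
     * <p>Analogously, attaching to a subscription created earlier (a sketch;
     * this fails if the subscription does not already exist):
     * <pre>
     * {@code
     * SubscriptionOptions attachOpts = SubscriptionOptions.newBuilder()
     *     .setCreateOrAttach(CreateOrAttach.ATTACH).build();
     * client.getSubscriber().asyncSubscribe(ByteString.copyFromUtf8("myTopic"),
     *                                       ByteString.copyFromUtf8("mySubscription"),
     *                                       attachOpts, myCallback, myContext);
     * }
     * </pre>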
     * @param topic
     *            Topic name of the subscription
     * @param subscriberId
     *            ID of the subscriber
     * @param options
     *            Options to pass to the subscription.
     * @param callback
     *            Callback to invoke when the subscribe request to the server
     *            has actually gone through. This will have to deal with error
     *            conditions on the async subscribe request.
     * @param context
     *            Calling context that the Callback needs since this is done
     *            asynchronously.
     */
    public void asyncSubscribe(ByteString topic, ByteString subscriberId,
                               SubscriptionOptions options, Callback<Void> callback, Object context);

    /**
     * Unsubscribe from a topic that the subscriberId user has previously
     * subscribed to.
     *
     * @param topic
     *            Topic name of the subscription
     * @param subscriberId
     *            ID of the subscriber
     * @throws CouldNotConnectException
     *             If we are not able to connect to the server host
     * @throws ClientNotSubscribedException
     *             If the client is not currently subscribed to the topic
     * @throws ServiceDownException
     *             If the server was down and unable to complete the request
     * @throws InvalidSubscriberIdException
     *             If the subscriberId is not valid. We may want to set aside
     *             certain formats of subscriberId's for different purposes.
     *             e.g. local vs. hub subscriber
     */
    public void unsubscribe(ByteString topic, ByteString subscriberId)
        throws CouldNotConnectException, ClientNotSubscribedException,
               ServiceDownException, InvalidSubscriberIdException;

    /**
     * Unsubscribe from a topic asynchronously that the subscriberId user has
     * previously subscribed to.
     *
     * @param topic
     *            Topic name of the subscription
     * @param subscriberId
     *            ID of the subscriber
     * @param callback
     *            Callback to invoke when the unsubscribe request to the server
     *            has actually gone through. This will have to deal with error
     *            conditions on the async unsubscribe request.
     * @param context
     *            Calling context that the Callback needs since this is done
     *            asynchronously.
     */
    public void asyncUnsubscribe(ByteString topic, ByteString subscriberId,
                                 Callback<Void> callback, Object context);

    /**
     * Manually send a consume message to the server for the given inputs.
     *
     * @param topic
     *            Topic name of the subscription
     * @param subscriberId
     *            ID of the subscriber
     * @param messageSeqId
     *            Message Sequence ID for the latest message that the client app
     *            has successfully consumed. All messages up to that point will
     *            also be considered as consumed.
     * @throws ClientNotSubscribedException
     *             If the client is not currently subscribed to the topic based
     *             on the client's local state.
     */
    public void consume(ByteString topic, ByteString subscriberId, MessageSeqId messageSeqId)
        throws ClientNotSubscribedException;

    /**
     * Checks if the subscriberId client is currently subscribed to the given
     * topic.
     *
     * @param topic
     *            Topic name of the subscription.
     * @param subscriberId
     *            ID of the subscriber
     * @throws CouldNotConnectException
     *             If we are not able to connect to the server host
     * @throws ServiceDownException
     *             If there is an error checking the server if the client has a
     *             subscription
     * @return Boolean indicating if the client has a subscription or not.
     */
    public boolean hasSubscription(ByteString topic, ByteString subscriberId)
        throws CouldNotConnectException, ServiceDownException;

    /**
     * Returns the list of subscriptions (topic names) this subscriberId client
     * is subscribed to.
     *
     * @param subscriberId
     *            ID of the subscriber
     * @return List of subscription name (topic) ByteStrings.
     * @throws CouldNotConnectException
     *             If we are not able to connect to the server host
     * @throws ServiceDownException
     *             If there is an error retrieving the list of topics
     */
    public List<ByteString> getSubscriptionList(ByteString subscriberId)
        throws CouldNotConnectException, ServiceDownException;

    /**
     * Begin delivery of messages from the server to us for this topic and
     * subscriberId.
     *
     * @param topic
     *            Topic name of the subscription
     * @param subscriberId
     *            ID of the subscriber
     * @param messageHandler
     *            Message Handler that will consume the subscribed messages
     * @throws ClientNotSubscribedException
     *             If the client is not currently subscribed to the topic
     * @throws AlreadyStartDeliveryException
     *             If delivery was already started with another message handler
     *             that has not been stopped.
     */
    public void startDelivery(ByteString topic, ByteString subscriberId,
                              MessageHandler messageHandler)
        throws ClientNotSubscribedException, AlreadyStartDeliveryException;

    /**
     * Begin delivery of messages from the server to us for this topic and
     * subscriberId.
     *
     * Only messages that pass the messageFilter are delivered to the
     * messageHandler.
     *
     * @param topic
     *            Topic name of the subscription
     * @param subscriberId
     *            ID of the subscriber
     * @param messageHandler
     *            Message Handler that will consume the subscribed messages
     * @throws ClientNotSubscribedException
     *             If the client is not currently subscribed to the topic
     * @throws AlreadyStartDeliveryException
     *             If delivery was already started with another message handler
     *             that has not been stopped.
     * @throws NullPointerException
     *             If either messageHandler or messageFilter is null.
     */
    public void startDeliveryWithFilter(ByteString topic, ByteString subscriberId,
                                        MessageHandler messageHandler,
                                        ClientMessageFilter messageFilter)
        throws ClientNotSubscribedException, AlreadyStartDeliveryException;

    /**
     * Stop delivery of messages for this topic and subscriberId.
     *
     * @param topic
     *            Topic name of the subscription
     * @param subscriberId
     *            ID of the subscriber
     * @throws ClientNotSubscribedException
     *             If the client is not currently subscribed to the topic
     */
    public void stopDelivery(ByteString topic, ByteString subscriberId)
        throws ClientNotSubscribedException;

    /**
     * Closes all of the client side cached data for this subscription without
     * actually sending an unsubscribe request to the server. This will close
     * the subscribe channel synchronously (if it exists) for the topic.
     *
     * @param topic
     *            Topic name of the subscription
     * @param subscriberId
     *            ID of the subscriber
     * @throws ServiceDownException
     *             If the subscribe channel was not able to be closed
     *             successfully
     */
    public void closeSubscription(ByteString topic, ByteString subscriberId)
        throws ServiceDownException;

    /**
     * Closes all of the client side cached data for this subscription without
     * actually sending an unsubscribe request to the server. This will close
     * the subscribe channel asynchronously (if it exists) for the topic.
     *
     * @param topic
     *            Topic name of the subscription
     * @param subscriberId
     *            ID of the subscriber
     * @param callback
     *            Callback to invoke when the subscribe channel has been closed.
     * @param context
     *            Calling context that the Callback needs since this is done
     *            asynchronously.
     */
    public void asyncCloseSubscription(ByteString topic, ByteString subscriberId,
                                       Callback<Void> callback, Object context);

    /**
     * Register a subscription listener which is notified about subscription
     * events for subscriptions that were created with resubscribe logic
     * disabled.
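     *
     * <p>For example (a sketch; the event handling body is illustrative):
     * <pre>
     * {@code
     * subscriber.addSubscriptionListener(new SubscriptionListener() {
     *     public void processEvent(ByteString topic, ByteString subscriberId,
     *                              SubscriptionEvent event) {
     *         // e.g. tear down application state for the subscription
     *     }
     * });
     * }
     * </pre>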
     *
     * @param listener
     *            Subscription Listener
     */
    public void addSubscriptionListener(SubscriptionListener listener);

    /**
     * Unregister a subscription listener.
     *
     * @param listener
     *            Subscription Listener
     */
    public void removeSubscriptionListener(SubscriptionListener listener);
}
bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/benchmark/000077500000000000000000000000001244507361200311375ustar00rootroot00000000000000BenchmarkPublisher.java000066400000000000000000000126311244507361200354760ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/benchmark/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.hedwig.client.benchmark;

import com.google.protobuf.ByteString;
import org.apache.bookkeeper.util.MathUtils;
import org.apache.hedwig.client.api.MessageHandler;
import org.apache.hedwig.client.api.Publisher;
import org.apache.hedwig.client.api.Subscriber;
import org.apache.hedwig.client.benchmark.BenchmarkUtils.BenchmarkCallback;
import org.apache.hedwig.client.benchmark.BenchmarkUtils.ThroughputLatencyAggregator;
import org.apache.hedwig.protocol.PubSubProtocol.Message;
import org.apache.hedwig.protocol.PubSubProtocol.SubscribeRequest.CreateOrAttach;
import org.apache.hedwig.util.Callback;

public class BenchmarkPublisher extends BenchmarkWorker {
    Publisher publisher;
    Subscriber subscriber;
    int msgSize;
    int nParallel;
    double rate;

    public BenchmarkPublisher(int numTopics, int numMessages, int numRegions,
                              int startTopicLabel, int partitionIndex, int numPartitions,
                              Publisher publisher, Subscriber subscriber,
                              int msgSize, int nParallel, int rate) {
        super(numTopics, numMessages, numRegions, startTopicLabel, partitionIndex, numPartitions);
        this.publisher = publisher;
        this.msgSize = msgSize;
        this.subscriber = subscriber;
        this.nParallel = nParallel;
        this.rate = rate / (numRegions * numPartitions + 0.0);
    }

    public void warmup(int nWarmup) throws Exception {
        ByteString topic = ByteString.copyFromUtf8("warmup" + partitionIndex);
        ByteString subId = ByteString.copyFromUtf8("sub");
        subscriber.subscribe(topic, subId, CreateOrAttach.CREATE_OR_ATTACH);

        subscriber.startDelivery(topic, subId, new MessageHandler() {
            @Override
            public void deliver(ByteString topic, ByteString subscriberId, Message msg,
                                Callback<Void> callback, Object context) {
                // noop
                callback.operationFinished(context, null);
            }
        });

        // picking constants arbitrarily for warmup phase
        ThroughputLatencyAggregator agg = new ThroughputLatencyAggregator("acked pubs", nWarmup, 100);
        agg.startProgress();

        Message msg = getMsg(1024);
        for (int i = 0; i < nWarmup; i++) {
            publisher.asyncPublish(topic, msg, new BenchmarkCallback(agg), null);
        }
        if (agg.tpAgg.queue.take() > 0) {
            throw new RuntimeException("Warmup publishes failed!");
        }
    }

    public Message
getMsg(int size) { StringBuilder sb = new StringBuilder(); for (int i = 0; i < size; i++) { sb.append('a'); } final ByteString body = ByteString.copyFromUtf8(sb.toString()); Message msg = Message.newBuilder().setBody(body).build(); return msg; } public Void call() throws Exception { Message msg = getMsg(msgSize); // Single warmup for every topic int myPublishCount = 0; for (int i = 0; i < numTopics; i++) { if (!HedwigBenchmark.amIResponsibleForTopic(startTopicLabel + i, partitionIndex, numPartitions)) { continue; } ByteString topic = ByteString.copyFromUtf8(HedwigBenchmark.TOPIC_PREFIX + (startTopicLabel + i)); publisher.publish(topic, msg); myPublishCount++; } long startTime = MathUtils.now(); int myPublishLimit = numMessages / numRegions / numPartitions - myPublishCount; myPublishCount = 0; ThroughputLatencyAggregator agg = new ThroughputLatencyAggregator("acked pubs", myPublishLimit, nParallel); agg.startProgress(); int topicLabel = 0; while (myPublishCount < myPublishLimit) { int topicNum = startTopicLabel + topicLabel; topicLabel = (topicLabel + 1) % numTopics; if (!HedwigBenchmark.amIResponsibleForTopic(topicNum, partitionIndex, numPartitions)) { continue; } ByteString topic = ByteString.copyFromUtf8(HedwigBenchmark.TOPIC_PREFIX + topicNum); if (rate > 0) { long delay = startTime + (long) (1000 * myPublishCount / rate) - MathUtils.now(); if (delay > 0) Thread.sleep(delay); } publisher.asyncPublish(topic, msg, new BenchmarkCallback(agg), null); myPublishCount++; } System.out.println("Finished unacked pubs: tput = " + BenchmarkUtils.calcTp(myPublishLimit, startTime) + " ops/s"); // Wait till the benchmark test has completed agg.tpAgg.queue.take(); System.out.println(agg.summarize(startTime)); return null; } } BenchmarkSubscriber.java000066400000000000000000000135021244507361200356420ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/benchmark/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
 */
package org.apache.hedwig.client.benchmark;

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.Callable;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.google.protobuf.ByteString;
import org.apache.bookkeeper.util.MathUtils;
import org.apache.hedwig.client.api.MessageHandler;
import org.apache.hedwig.client.api.Subscriber;
import org.apache.hedwig.client.benchmark.BenchmarkUtils.BenchmarkCallback;
import org.apache.hedwig.client.benchmark.BenchmarkUtils.ThroughputAggregator;
import org.apache.hedwig.client.benchmark.BenchmarkUtils.ThroughputLatencyAggregator;
import org.apache.hedwig.protocol.PubSubProtocol.Message;
import org.apache.hedwig.protocol.PubSubProtocol.RegionSpecificSeqId;
import org.apache.hedwig.protocol.PubSubProtocol.SubscribeRequest.CreateOrAttach;
import org.apache.hedwig.util.Callback;

public class BenchmarkSubscriber extends BenchmarkWorker implements Callable<Void> {
    static final Logger logger = LoggerFactory.getLogger(BenchmarkSubscriber.class);

    Subscriber subscriber;
    ByteString subId;

    public BenchmarkSubscriber(int numTopics, int numMessages, int numRegions,
                               int startTopicLabel, int partitionIndex, int numPartitions,
                               Subscriber subscriber, ByteString subId) {
        super(numTopics, numMessages, numRegions, startTopicLabel, partitionIndex, numPartitions);
        this.subscriber = subscriber;
        this.subId = subId;
    }

    public void warmup(int numWarmup) throws InterruptedException {
        /*
         * multiplying the number of ops by numPartitions because we end up
         * skipping many because of the partitioning logic
         */
        multiSub("warmup", "warmup", 0, numWarmup, numWarmup * numPartitions);
    }

    public Void call() throws Exception {
        final ThroughputAggregator agg = new ThroughputAggregator("recvs", numMessages);
        agg.startProgress();

        final Map<String, Long> lastSeqIdSeenMap = new HashMap<String, Long>();

        for (int i = startTopicLabel; i < startTopicLabel + numTopics; i++) {
            if (!HedwigBenchmark.amIResponsibleForTopic(i, partitionIndex, numPartitions)) {
                continue;
            }

            final String topic = HedwigBenchmark.TOPIC_PREFIX + i;

            subscriber.subscribe(ByteString.copyFromUtf8(topic), subId, CreateOrAttach.CREATE_OR_ATTACH);
            subscriber.startDelivery(ByteString.copyFromUtf8(topic), subId, new MessageHandler() {

                @Override
                public void deliver(ByteString thisTopic, ByteString subscriberId, Message msg,
                                    Callback<Void> callback, Object context) {
                    logger.debug("Got message from src-region: {} with seq-id: {}",
                                 msg.getSrcRegion(), msg.getMsgId());

                    String mapKey = topic + msg.getSrcRegion().toStringUtf8();
                    Long lastSeqIdSeen = lastSeqIdSeenMap.get(mapKey);
                    if (lastSeqIdSeen == null) {
                        lastSeqIdSeen = (long) 0;
                    }

                    if (getSrcSeqId(msg) <= lastSeqIdSeen) {
                        logger.info("Redelivery of message, src-region: " + msg.getSrcRegion()
                                    + " seq-id: " + msg.getMsgId());
                    } else {
                        agg.ding(false);
                    }

                    callback.operationFinished(context, null);
                }
            });
        }
        System.out.println("Finished subscribing to topics and now waiting for messages to come in...");
        // Wait till the benchmark test has completed
        agg.queue.take();
        System.out.println(agg.summarize(agg.earliest.get()));
        return null;
    }

    long getSrcSeqId(Message msg) {
        if (msg.getMsgId().getRemoteComponentsCount() == 0) {
            return msg.getMsgId().getLocalComponent();
        }

        for (RegionSpecificSeqId rseqId : msg.getMsgId().getRemoteComponentsList()) {
            if (rseqId.getRegion().equals(msg.getSrcRegion()))
                return rseqId.getSeqId();
        }

        return msg.getMsgId().getLocalComponent();
    }

    void multiSub(String label, String topicPrefix, int start, final int npar, final int count)
            throws InterruptedException {
        long startTime =
MathUtils.now(); ThroughputLatencyAggregator agg = new ThroughputLatencyAggregator(label, count / numPartitions, npar); agg.startProgress(); int end = start + count; for (int i = start; i < end; ++i) { if (!HedwigBenchmark.amIResponsibleForTopic(i, partitionIndex, numPartitions)) { continue; } subscriber.asyncSubscribe(ByteString.copyFromUtf8(topicPrefix + i), subId, CreateOrAttach.CREATE_OR_ATTACH, new BenchmarkCallback(agg), null); } // Wait till the benchmark test has completed agg.tpAgg.queue.take(); if (count > 1) System.out.println(agg.summarize(startTime)); } } BenchmarkUtils.java000066400000000000000000000155651244507361200346520ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/benchmark/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.client.benchmark; import java.util.concurrent.LinkedBlockingQueue; import java.util.concurrent.Semaphore; import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.atomic.AtomicLong; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.apache.bookkeeper.util.MathUtils; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.util.Callback; public class BenchmarkUtils { static final Logger logger = LoggerFactory.getLogger(BenchmarkUtils.class); public static double calcTp(final int count, long startTime) { return 1000. * count / (MathUtils.now() - startTime); } /** * Stats aggregator for callback (round-trip) operations. Measures both * throughput and latency. */ public static class ThroughputLatencyAggregator { int numBuckets; final ThroughputAggregator tpAgg; final Semaphore outstanding; final AtomicLong sum = new AtomicLong(); final AtomicLong[] latencyBuckets; // bucket[i] is count of number of operations that took >= i ms and < // (i+1) ms. 
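        // For example, with the default numBuckets = 101 a latency of 3 ms lands
        // in bucket 3, and anything at or above 100 ms falls into the final
        // overflow bucket; getPercentile(99.9) below then walks the buckets until
        // it has covered 99.9% of the recorded operations.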
        public ThroughputLatencyAggregator(String label, int count, int limit) throws InterruptedException {
            numBuckets = Integer.getInteger("numBuckets", 101);
            latencyBuckets = new AtomicLong[numBuckets];
            tpAgg = new ThroughputAggregator(label, count);
            outstanding = new Semaphore(limit);
            for (int i = 0; i < numBuckets; i++) {
                latencyBuckets[i] = new AtomicLong();
            }
        }

        public void startProgress() {
            tpAgg.startProgress();
        }

        public void reportLatency(long latency) {
            sum.addAndGet(latency);

            int bucketIndex;
            if (latency >= numBuckets) {
                bucketIndex = numBuckets - 1;
            } else {
                bucketIndex = (int) latency;
            }
            latencyBuckets[bucketIndex].incrementAndGet();
        }

        private String getPercentile(double percentile) {
            int numInliersNeeded = (int) (percentile / 100 * tpAgg.count);
            int numInliersFound = 0;
            for (int i = 0; i < numBuckets - 1; i++) {
                numInliersFound += latencyBuckets[i].intValue();
                if (numInliersFound > numInliersNeeded) {
                    return i + "";
                }
            }
            return " >= " + (numBuckets - 1);
        }

        public String summarize(long startTime) {
            double percentile = Double.parseDouble(System.getProperty("percentile", "99.9"));
            return tpAgg.summarize(startTime) + ", avg latency = " + sum.get() / tpAgg.count
                   + ", " + percentile + "%ile latency = " + getPercentile(percentile);
        }
    }

    /**
     * Stats aggregator for non-callback (single-shot) operations. Measures just
     * throughput.
     */
    public static class ThroughputAggregator {
        final String label;
        final int count;
        final AtomicInteger done = new AtomicInteger();
        final AtomicLong earliest = new AtomicLong();
        final AtomicInteger numFailed = new AtomicInteger();
        final Thread progressThread;
        final LinkedBlockingQueue<Integer> queue = new LinkedBlockingQueue<Integer>();

        public ThroughputAggregator(final String label, final int count) {
            this.label = label;
            this.count = count;
            if (count == 0)
                queue.add(0);
            if (Boolean.getBoolean("progress")) {
                progressThread = new Thread(new Runnable() {
                    @Override
                    public void run() {
                        try {
                            for (int doneSnap = 0, prev = 0; doneSnap < count;
                                 prev = doneSnap, doneSnap = done.get()) {
                                if (doneSnap > prev) {
                                    System.out.println(label + " progress: " + doneSnap + " of " + count);
                                }
                                Thread.sleep(1000);
                            }
                        } catch (Exception ex) {
                            throw new RuntimeException(ex);
                        }
                    }
                });
            } else {
                progressThread = null;
            }
        }

        public void startProgress() {
            if (progressThread != null) {
                progressThread.start();
            }
        }

        public void ding(boolean failed) {
            int snapDone = done.incrementAndGet();
            earliest.compareAndSet(0, MathUtils.now());
            if (failed)
                numFailed.incrementAndGet();
            if (logger.isDebugEnabled())
                logger.debug(label + " " + (failed ? "failed" : "succeeded") + ", done so far = " + snapDone);
            if (snapDone == count) {
                queue.add(numFailed.get());
            }
        }

        public String summarize(long startTime) {
            return "Finished " + label + ": count = " + done.get() + ", tput = "
                   + calcTp(count, startTime) + " ops/s, numFailed = " + numFailed;
        }
    }

    public static class BenchmarkCallback implements Callback<Void> {
        final ThroughputLatencyAggregator agg;
        final long startTime;

        public BenchmarkCallback(ThroughputLatencyAggregator agg) throws InterruptedException {
            this.agg = agg;
            agg.outstanding.acquire();
            // Must set the start time *after* acquiring a permit on outstanding.
startTime = MathUtils.now(); } private void finish(boolean failed) { agg.reportLatency(MathUtils.now() - startTime); agg.tpAgg.ding(failed); agg.outstanding.release(); } @Override public void operationFinished(Object ctx, Void resultOfOperation) { finish(false); } @Override public void operationFailed(Object ctx, PubSubException exception) { finish(true); } }; } BenchmarkWorker.java000066400000000000000000000033601244507361200350110ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/benchmark/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.client.benchmark; public class BenchmarkWorker { int numTopics; int numMessages; int numRegions; int startTopicLabel; int partitionIndex; int numPartitions; public BenchmarkWorker(int numTopics, int numMessages, int numRegions, int startTopicLabel, int partitionIndex, int numPartitions) { this.numTopics = numTopics; this.numMessages = numMessages; this.numRegions = numRegions; this.startTopicLabel = startTopicLabel; this.partitionIndex = partitionIndex; this.numPartitions = numPartitions; if (numMessages % (numTopics * numRegions) != 0) { throw new RuntimeException("Number of messages not equally divisible among regions and topics"); } if (numTopics % numPartitions != 0) { throw new RuntimeException("Number of topics not equally divisible among partitions"); } } } HedwigBenchmark.java000066400000000000000000000155221244507361200347520ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/benchmark/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
 */
package org.apache.hedwig.client.benchmark;

import java.io.File;
import java.util.concurrent.Callable;

import org.apache.commons.configuration.ConfigurationException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.jboss.netty.logging.InternalLoggerFactory;
import org.jboss.netty.logging.Log4JLoggerFactory;

import com.google.protobuf.ByteString;
import org.apache.hedwig.util.HedwigSocketAddress;
import org.apache.hedwig.client.conf.ClientConfiguration;
import org.apache.hedwig.client.HedwigClient;
import org.apache.hedwig.client.api.Publisher;
import org.apache.hedwig.client.api.Subscriber;

import org.apache.commons.cli.HelpFormatter;
import org.apache.commons.cli.Option;
import org.apache.commons.cli.Options;
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.CommandLineParser;
import org.apache.commons.cli.PosixParser;
import org.apache.commons.cli.ParseException;

public class HedwigBenchmark implements Callable<Void> {
    protected static final Logger logger = LoggerFactory.getLogger(HedwigBenchmark.class);

    static final String TOPIC_PREFIX = "topic";

    private final HedwigClient client;
    private final Publisher publisher;
    private final Subscriber subscriber;
    private final CommandLine cmd;

    public HedwigBenchmark(ClientConfiguration cfg, CommandLine cmd) {
        client = new HedwigClient(cfg);
        publisher = client.getPublisher();
        subscriber = client.getSubscriber();
        this.cmd = cmd;
    }

    static boolean amIResponsibleForTopic(int topicNum, int partitionIndex, int numPartitions) {
        return topicNum % numPartitions == partitionIndex;
    }

    @Override
    public Void call() throws Exception {

        //
        // Parameters.
        //

        // What program to run: pub, sub (subscription benchmark), recv.
        final String mode = cmd.getOptionValue("mode", "");

        // Number of requests to make (publishes or subscribes).
        int numTopics = Integer.valueOf(cmd.getOptionValue("nTopics", "50"));
        int numMessages = Integer.valueOf(cmd.getOptionValue("nMsgs", "1000"));
        int numRegions = Integer.valueOf(cmd.getOptionValue("nRegions", "1"));
        int startTopicLabel = Integer.valueOf(cmd.getOptionValue("startTopicLabel", "0"));
        int partitionIndex = Integer.valueOf(cmd.getOptionValue("partitionIndex", "0"));
        int numPartitions = Integer.valueOf(cmd.getOptionValue("nPartitions", "1"));

        int replicaIndex = Integer.valueOf(cmd.getOptionValue("replicaIndex", "0"));

        int rate = Integer.valueOf(cmd.getOptionValue("rate", "0"));
        int nParallel = Integer.valueOf(cmd.getOptionValue("npar", "100"));
        int msgSize = Integer.valueOf(cmd.getOptionValue("msgSize", "1024"));

        // Number of warmup subscriptions to make.
        final int nWarmups = Integer.valueOf(cmd.getOptionValue("nwarmups", "1000"));

        if (mode.equals("sub")) {
            BenchmarkSubscriber benchmarkSub = new BenchmarkSubscriber(numTopics, 0, 1,
                startTopicLabel, 0, 1, subscriber, ByteString.copyFromUtf8("mySub"));
            benchmarkSub.warmup(nWarmups);
            benchmarkSub.call();

        } else if (mode.equals("recv")) {
            BenchmarkSubscriber benchmarkSub = new BenchmarkSubscriber(numTopics, numMessages,
                numRegions, startTopicLabel, partitionIndex, numPartitions, subscriber,
                ByteString.copyFromUtf8("sub-" + replicaIndex));
            benchmarkSub.call();

        } else if (mode.equals("pub")) {
            // Offered load in msgs/second.
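            // A hypothetical invocation of this mode (the launcher command is
            // illustrative; the flag names match the options registered in main()
            // below):
            //
            //   java org.apache.hedwig.client.benchmark.HedwigBenchmark -mode pub \
            //       -nTopics 50 -nMsgs 100000 -rate 1000 -npar 100 -msgSize 1024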
            BenchmarkPublisher benchmarkPub = new BenchmarkPublisher(numTopics, numMessages,
                numRegions, startTopicLabel, partitionIndex, numPartitions, publisher,
                subscriber, msgSize, nParallel, rate);
            benchmarkPub.warmup(nWarmups);
            benchmarkPub.call();

        } else {
            throw new Exception("unknown mode: " + mode);
        }

        return null;
    }

    public static void main(String[] args) throws Exception {
        Options options = new Options();
        options.addOption("mode", true, "sub, recv, or pub");
        options.addOption("nTopics", true, "Number of topics, default 50");
        options.addOption("nMsgs", true, "Number of messages, default 1000");
        options.addOption("nRegions", true, "Number of regions, default 1");
        options.addOption("startTopicLabel", true,
                          "Prefix of topic labels. Must be numeric. Default 0");
        options.addOption("partitionIndex", true,
                          "If partitioning, the partition index for this client");
        options.addOption("nPartitions", true, "Number of partitions, default 1");
        options.addOption("replicaIndex", true, "default 0");
        options.addOption("rate", true, "default 0");
        options.addOption("npar", true, "default 100");
        options.addOption("msgSize", true, "Size of messages, default 1024");
        options.addOption("nwarmups", true, "Number of warmup messages, default 1000");
        options.addOption("defaultHub", true,
                          "Default hedwig hub to connect to, default localhost:4080");

        CommandLineParser parser = new PosixParser();
        final CommandLine cmd = parser.parse(options, args);

        if (cmd.hasOption("help")) {
            HelpFormatter formatter = new HelpFormatter();
            formatter.printHelp("HedwigBenchmark <options>", options);
            System.exit(-1);
        }

        ClientConfiguration cfg = new ClientConfiguration() {
            public HedwigSocketAddress getDefaultServerHedwigSocketAddress() {
                return new HedwigSocketAddress(cmd.getOptionValue("defaultHub", "localhost:4080"));
            }

            public boolean isSSLEnabled() {
                return false;
            }
        };

        InternalLoggerFactory.setDefaultFactory(new Log4JLoggerFactory());

        HedwigBenchmark app = new HedwigBenchmark(cfg, cmd);
        app.call();
        System.exit(0);
    }

}
bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/conf/000077500000000000000000000000001244507361200301325ustar00rootroot00000000000000ClientConfiguration.java000066400000000000000000000201161244507361200346640ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/conf/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
*/ package org.apache.hedwig.client.conf; import java.net.InetSocketAddress; import org.apache.commons.configuration.ConfigurationException; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.apache.hedwig.conf.AbstractConfiguration; import org.apache.hedwig.util.HedwigSocketAddress; public class ClientConfiguration extends AbstractConfiguration { Logger logger = LoggerFactory.getLogger(ClientConfiguration.class); // Protected member variables for configuration parameter names protected static final String DEFAULT_SERVER_HOST = "default_server_host"; protected static final String MAX_MESSAGE_SIZE = "max_message_size"; protected static final String MAX_SERVER_REDIRECTS = "max_server_redirects"; protected static final String AUTO_SEND_CONSUME_MESSAGE_ENABLED = "auto_send_consume_message_enabled"; protected static final String CONSUMED_MESSAGES_BUFFER_SIZE = "consumed_messages_buffer_size"; protected static final String MESSAGE_CONSUME_RETRY_WAIT_TIME = "message_consume_retry_wait_time"; protected static final String SUBSCRIBE_RECONNECT_RETRY_WAIT_TIME = "subscribe_reconnect_retry_wait_time"; protected static final String MAX_OUTSTANDING_MESSAGES = "max_outstanding_messages"; protected static final String SERVER_ACK_RESPONSE_TIMEOUT = "server_ack_response_timeout"; protected static final String TIMEOUT_THREAD_RUN_INTERVAL = "timeout_thread_run_interval"; protected static final String SSL_ENABLED = "ssl_enabled"; protected static final String SUBSCRIPTION_MESSAGE_BOUND = "subscription_message_bound"; protected static final String SUBSCRIPTION_CHANNEL_SHARING_ENABLED = "subscription_channel_sharing_enabled"; // Singletons we want to instantiate only once per ClientConfiguration protected HedwigSocketAddress myDefaultServerAddress = null; // Getters for the various Client Configuration parameters. // This should point to the default server host, or the VIP fronting all of // the server hubs. This will return the HedwigSocketAddress which // encapsulates both the regular and SSL port connection to the server host. protected HedwigSocketAddress getDefaultServerHedwigSocketAddress() { if (myDefaultServerAddress == null) myDefaultServerAddress = new HedwigSocketAddress(conf.getString(DEFAULT_SERVER_HOST, "localhost:4080:9876")); return myDefaultServerAddress; } // This will get the default server InetSocketAddress based on if SSL is // enabled or not. public InetSocketAddress getDefaultServerHost() { if (isSSLEnabled()) return getDefaultServerHedwigSocketAddress().getSSLSocketAddress(); else return getDefaultServerHedwigSocketAddress().getSocketAddress(); } public int getMaximumMessageSize() { return conf.getInt(MAX_MESSAGE_SIZE, 2 * 1024 * 1024); } // This parameter is for setting the maximum number of server redirects to // allow before we consider it as an error condition. This is to stop // infinite redirect loops in case there is a problem with the hub servers // topic mastership. public int getMaximumServerRedirects() { return conf.getInt(MAX_SERVER_REDIRECTS, 2); } // This parameter is a boolean flag indicating if the client library should // automatically send the consume message to the server based on the // configured amount of messages consumed by the client app. The client app // could choose to override this behavior and instead, manually send the // consume message to the server via the client library using its own // logic and policy. 
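    // For illustration, a client configuration file setting the parameters in
    // this class might contain (property names are the constants defined above;
    // the values shown are just the defaults):
    //
    //   default_server_host=localhost:4080:9876
    //   max_message_size=2097152
    //   auto_send_consume_message_enabled=true
    //   consumed_messages_buffer_size=5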
    public boolean isAutoSendConsumeMessageEnabled() {
        return conf.getBoolean(AUTO_SEND_CONSUME_MESSAGE_ENABLED, true);
    }

    // This parameter is to set how many consumed messages we'll buffer up
    // before we send the Consume message to the server indicating that all
    // of the messages up to that point have been successfully consumed by
    // the client.
    public int getConsumedMessagesBufferSize() {
        return conf.getInt(CONSUMED_MESSAGES_BUFFER_SIZE, 5);
    }

    // This parameter is used to determine how long we wait before retrying the
    // client app's MessageHandler to consume a subscribed message sent to us
    // from the server. The time to wait is in milliseconds.
    public long getMessageConsumeRetryWaitTime() {
        return conf.getLong(MESSAGE_CONSUME_RETRY_WAIT_TIME, 10000);
    }

    // This parameter is used to determine how long we wait before retrying the
    // Subscribe Reconnect request. This is done when the connection to a server
    // disconnects and we attempt to connect to it. We'll keep on trying but
    // in case the server(s) is down for a longer time, we want to throttle
    // how often we do the subscribe reconnect request. The time to wait is in
    // milliseconds.
    public long getSubscribeReconnectRetryWaitTime() {
        return conf.getLong(SUBSCRIBE_RECONNECT_RETRY_WAIT_TIME, 10000);
    }

    // This parameter is for setting the maximum number of outstanding messages
    // the client app can be consuming at a time for topic subscription before
    // we throttle things and stop reading from the Netty Channel.
    public int getMaximumOutstandingMessages() {
        return conf.getInt(MAX_OUTSTANDING_MESSAGES, 10);
    }

    // This parameter is used to determine how long we wait (in milliseconds)
    // before we time out outstanding PubSubRequests that were written to the
    // server successfully but haven't yet received the ack response.
    public long getServerAckResponseTimeout() {
        return conf.getLong(SERVER_ACK_RESPONSE_TIMEOUT, 30000);
    }

    // This parameter is used to determine how often we run the server ack
    // response timeout cleaner thread (in milliseconds).
    public long getTimeoutThreadRunInterval() {
        return conf.getLong(TIMEOUT_THREAD_RUN_INTERVAL, 60000);
    }

    // This parameter is a boolean flag indicating if communication with the
    // server should be done via SSL for encryption. This is needed for
    // cross-colo hub clients listening to non-local servers.
    public boolean isSSLEnabled() {
        return conf.getBoolean(SSL_ENABLED, false);
    }

    /**
     * This parameter is a boolean flag indicating whether subscription
     * channels should be multiplexed.
     */
    public boolean isSubscriptionChannelSharingEnabled() {
        return conf.getBoolean(SUBSCRIPTION_CHANNEL_SHARING_ENABLED, false);
    }

    /**
     * The maximum number of messages the hub will queue for subscriptions
     * created using this configuration. The hub will always queue the most
     * recent messages. If there are enough publishes to the topic to hit
     * the bound, then the oldest messages are dropped from the queue.
     *
     * A bound of 0 disables the bound completely. This is the default.
     */
    public int getSubscriptionMessageBound() {
        return conf.getInt(SUBSCRIPTION_MESSAGE_BOUND, 0);
    }

    // Validate that the configuration properties are valid.
public void validate() throws ConfigurationException { if (isSSLEnabled() && getDefaultServerHedwigSocketAddress().getSSLSocketAddress() == null) { throw new ConfigurationException("SSL is enabled but a default server SSL port is not given!"); } // Add other validation checks here } } bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/data/000077500000000000000000000000001244507361200301165ustar00rootroot00000000000000MessageConsumeData.java000066400000000000000000000040351244507361200344140ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/data/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.client.data; import org.apache.hedwig.client.data.TopicSubscriber; import org.apache.hedwig.protocol.PubSubProtocol.Message; /** * Wrapper class to store all of the data points needed to encapsulate Message * Consumption in the Subscribe flow for consuming a message sent from the * server for a given TopicSubscriber. This will be used as the Context in the * VoidCallback for the MessageHandlers once they've completed consuming the * message. * */ public class MessageConsumeData { // Member variables public final TopicSubscriber topicSubscriber; // This is the Message sent from the server for Subscribes for consumption // by the client. public final Message msg; // Constructor public MessageConsumeData(final TopicSubscriber topicSubscriber, final Message msg) { this.topicSubscriber = topicSubscriber; this.msg = msg; } @Override public String toString() { StringBuilder sb = new StringBuilder(); if (topicSubscriber != null) { sb.append("Subscription: ").append(topicSubscriber); } if (msg != null) { sb.append(PubSubData.COMMA).append("Message: ").append(msg); } return sb.toString(); } } bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/data/PubSubData.java000066400000000000000000000171621244507361200327620ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License.
*/ package org.apache.hedwig.client.data; import java.util.List; import com.google.protobuf.ByteString; import org.apache.hedwig.client.netty.HChannel; import org.apache.hedwig.protocol.PubSubProtocol; import org.apache.hedwig.protocol.PubSubProtocol.Message; import org.apache.hedwig.protocol.PubSubProtocol.OperationType; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionOptions; import org.apache.hedwig.util.Callback; /** * Wrapper class to store all of the data points needed to encapsulate all * PubSub type of request operations the client will do. This includes knowing * all of the information needed if we need to redo the publish/subscribe * request in case of a server redirect. This will be used for all sync/async * calls, and for all the known types of request messages to send to the server * hubs: Publish, Subscribe, Unsubscribe, and Consume. * */ public class PubSubData { // Static string constants protected static final String COMMA = ", "; // Member variables needed during object construction time. public final ByteString topic; public final Message msg; public final ByteString subscriberId; // Enum to indicate what type of operation this PubSub request data object // is for. public final OperationType operationType; // Options for the subscription public final SubscriptionOptions options; // These two variables are not final since we might override them // in the case of a Subscribe reconnect. private Callback<PubSubProtocol.ResponseBody> callback; public Object context; // Member variables used after object has been constructed. // List of all servers we've sent the PubSubRequest to successfully. // This is to keep track of redirected servers that responded back to us. public List<ByteString> triedServers; // List of all servers that we've tried to connect or write to but // were unsuccessful. We'll retry sending the PubSubRequest but will // quit if we're trying to connect or write to a server that we've // previously attempted. public List<ByteString> connectFailedServers; public List<ByteString> writeFailedServers; // Boolean to the hub server indicating if it should claim ownership // of the topic the PubSubRequest is for. This is mainly used after // a server redirect. Defaults to false. public boolean shouldClaim = false; // TxnID for the PubSubData if it was sent as a PubSubRequest to the hub // server. This is used in the WriteCallback in case of failure. We want // to remove it from the ResponseHandler.txn2PubSubData map since the // failed PubSubRequest will not get an ack response from the server. // This is set later in the PubSub flows only when we write the actual // request. Therefore it is not an argument in the constructor. public long txnId; // Time in milliseconds using the System.currentTimeMillis() call when the // PubSubRequest was written on the netty Channel to the server. public long requestWriteTime; // For synchronous calls, this variable is used to know when the background // async process for it has completed, set in the VoidCallback.
public boolean isDone = false; // Record the original channel for a resubscribe request private HChannel origChannel = null; // Constructor for all types of PubSub request data to send to the server public PubSubData(final ByteString topic, final Message msg, final ByteString subscriberId, final OperationType operationType, final SubscriptionOptions options, final Callback<PubSubProtocol.ResponseBody> callback, final Object context) { this.topic = topic; this.msg = msg; this.subscriberId = subscriberId; this.operationType = operationType; this.options = options; this.callback = callback; this.context = context; } public void setCallback(Callback<PubSubProtocol.ResponseBody> callback) { this.callback = callback; } public Callback<PubSubProtocol.ResponseBody> getCallback() { return callback; } public void operationFinishedToCallback(Object context, PubSubProtocol.ResponseBody response){ callback.operationFinished(context, response); } public boolean isResubscribeRequest() { return null != origChannel; } public HChannel getOriginalChannelForResubscribe() { return origChannel; } public void setOriginalChannelForResubscribe(HChannel channel) { this.origChannel = channel; } // Clear all of the stored servers we've contacted or attempted to in this // request. public void clearServersList() { if (triedServers != null) triedServers.clear(); if (connectFailedServers != null) connectFailedServers.clear(); if (writeFailedServers != null) writeFailedServers.clear(); } @Override public String toString() { StringBuilder sb = new StringBuilder(); if (topic != null) sb.append("Topic: " + topic.toStringUtf8()); if (msg != null) sb.append(COMMA).append("Message: " + msg); if (subscriberId != null) sb.append(COMMA).append("SubscriberId: " + subscriberId.toStringUtf8()); if (operationType != null) sb.append(COMMA).append("Operation Type: " + operationType.toString()); if (options != null) sb.append(COMMA).append("Create Or Attach: " + options.getCreateOrAttach().toString()) .append(COMMA).append("Message Bound: " + options.getMessageBound()); if (triedServers != null && triedServers.size() > 0) { sb.append(COMMA).append("Tried Servers: "); for (ByteString triedServer : triedServers) { sb.append(triedServer.toStringUtf8()).append(COMMA); } } if (connectFailedServers != null && connectFailedServers.size() > 0) { sb.append(COMMA).append("Connect Failed Servers: "); for (ByteString connectFailedServer : connectFailedServers) { sb.append(connectFailedServer.toStringUtf8()).append(COMMA); } } if (writeFailedServers != null && writeFailedServers.size() > 0) { sb.append(COMMA).append("Write Failed Servers: "); for (ByteString writeFailedServer : writeFailedServers) { sb.append(writeFailedServer.toStringUtf8()).append(COMMA); } } sb.append(COMMA).append("Should Claim: " + shouldClaim); if (txnId != 0) sb.append(COMMA).append("TxnID: " + txnId); if (requestWriteTime != 0) sb.append(COMMA).append("Request Write Time: " + requestWriteTime); sb.append(COMMA).append("Is Done: " + isDone); return sb.toString(); } }
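/* Illustrative sketch (not part of the original source): how a publish request * would be wrapped in a PubSubData. Publishes carry no subscriberId and no * SubscriptionOptions, so null is passed for both; the variable names here are * hypothetical. * * PubSubData pubSubData = new PubSubData(topic, msg, null, * OperationType.PUBLISH, null, publishCallback, context); * channelManager.submitOp(pubSubData); */ TopicSubscriber.java000066400000000000000000000046071244507361200340130ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/data/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License.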
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.client.data; import org.apache.commons.lang.builder.HashCodeBuilder; import com.google.protobuf.ByteString; /** * Wrapper class object for the Topic + SubscriberId combination. Since the * Subscribe flows always use the Topic + SubscriberId as the logical entity, * we'll create a simple class to encapsulate that. * */ public class TopicSubscriber { private final ByteString topic; private final ByteString subscriberId; private final int hashCode; public TopicSubscriber(final ByteString topic, final ByteString subscriberId) { this.topic = topic; this.subscriberId = subscriberId; hashCode = new HashCodeBuilder().append(topic).append(subscriberId).toHashCode(); } @Override public boolean equals(final Object o) { if (o == this) return true; if (!(o instanceof TopicSubscriber)) return false; final TopicSubscriber obj = (TopicSubscriber) o; return topic.equals(obj.topic) && subscriberId.equals(obj.subscriberId); } @Override public int hashCode() { return hashCode; } @Override public String toString() { StringBuilder sb = new StringBuilder(); if (topic != null) sb.append("Topic: " + topic.toStringUtf8()); if (subscriberId != null) sb.append(PubSubData.COMMA).append("SubscriberId: " + subscriberId.toStringUtf8()); return sb.toString(); } public ByteString getTopic() { return topic; } public ByteString getSubscriberId() { return subscriberId; } } bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/exceptions/000077500000000000000000000000001244507361200313665ustar00rootroot00000000000000AlreadyStartDeliveryException.java000066400000000000000000000025001244507361200401310ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/exceptions/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.client.exceptions; /** * This is a Hedwig client side exception when the local client wants to * startDelivery using another message handler before stopping the previous one.
*/ public class AlreadyStartDeliveryException extends Exception { private static final long serialVersionUID = 873259807218723524L; public AlreadyStartDeliveryException(String message) { super(message); } public AlreadyStartDeliveryException(String message, Throwable t) { super(message, t); } } InvalidSubscriberIdException.java000066400000000000000000000026021244507361200377200ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/exceptions/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.client.exceptions; /** * This is a Hedwig client side exception when the local client wants to perform * subscribe-type operations. Currently, to distinguish between local and hub * subscribers, the subscriberId will have a specific format. */ public class InvalidSubscriberIdException extends Exception { private static final long serialVersionUID = 873259807218723523L; public InvalidSubscriberIdException(String message) { super(message); } public InvalidSubscriberIdException(String message, Throwable t) { super(message, t); } } NoResponseHandlerException.java000066400000000000000000000024441244507361200374260ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/exceptions/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.client.exceptions; /** * This is a Hedwig client side exception thrown when it can't get the response * handler from the channel pipeline responsible for a PubSubRequest.
*/ public class NoResponseHandlerException extends Exception { private static final long serialVersionUID = 1L; public NoResponseHandlerException(String message) { super(message); } public NoResponseHandlerException(String message, Throwable t) { super(message, t); } } ResubscribeException.java000066400000000000000000000023071244507361200363030ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/exceptions/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.client.exceptions; /** * This is a Hedwig client side exception when the client fails to resubscribe * after a topic has moved or a subscription has been closed. */ public class ResubscribeException extends Exception { public ResubscribeException(String message) { super(message); } public ResubscribeException(String message, Throwable t) { super(message, t); } } ServerRedirectLoopException.java000066400000000000000000000027041244507361200376160ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/exceptions/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.client.exceptions; /** * This is a Hedwig client side exception when the PubSubRequest is being * redirected to a server to which the request has already been sent. * To avoid a cyclical redirect loop, this condition is checked for * and this exception is thrown to the client caller. */ public class ServerRedirectLoopException extends Exception { private static final long serialVersionUID = 98723508723152897L; public ServerRedirectLoopException(String message) { super(message); } public ServerRedirectLoopException(String message, Throwable t) { super(message, t); } } TooManyServerRedirectsException.java000066400000000000000000000027451244507361200404610ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/exceptions/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements.
See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.client.exceptions; /** * This is a Hedwig client side exception when there have been too many server * redirects during a publish/subscribe call. We only allow a certain number of * server redirects to find the topic master. If we have exceeded this * configured amount, the publish/subscribe will fail with this exception. * */ public class TooManyServerRedirectsException extends Exception { private static final long serialVersionUID = 2341192937965635310L; public TooManyServerRedirectsException(String message) { super(message); } public TooManyServerRedirectsException(String message, Throwable t) { super(message, t); } } bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/handlers/000077500000000000000000000000001244507361200310055ustar00rootroot00000000000000AbstractResponseHandler.java000066400000000000000000000161141244507361200363540ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/handlers/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package org.apache.hedwig.client.handlers; import java.net.InetSocketAddress; import java.util.LinkedList; import com.google.protobuf.ByteString; import org.jboss.netty.channel.Channel; import org.apache.hedwig.client.conf.ClientConfiguration; import org.apache.hedwig.client.data.PubSubData; import org.apache.hedwig.client.netty.HChannelManager; import org.apache.hedwig.client.exceptions.ServerRedirectLoopException; import org.apache.hedwig.client.exceptions.TooManyServerRedirectsException; import org.apache.hedwig.client.netty.NetUtils; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.exceptions.PubSubException.ServiceDownException; import org.apache.hedwig.protocol.PubSubProtocol.PubSubResponse; import org.apache.hedwig.util.HedwigSocketAddress; import static org.apache.hedwig.util.VarArgs.va; import org.slf4j.Logger; import org.slf4j.LoggerFactory; public abstract class AbstractResponseHandler { private static Logger logger = LoggerFactory.getLogger(AbstractResponseHandler.class); protected final ClientConfiguration cfg; protected final HChannelManager channelManager; protected AbstractResponseHandler(ClientConfiguration cfg, HChannelManager channelManager) { this.cfg = cfg; this.channelManager = channelManager; } /** * Logic to handle received response. * * @param response * PubSubResponse received from hub server. * @param pubSubData * PubSubData for the pub/sub request. * @param channel * Channel we used to make the request. */ public abstract void handleResponse(PubSubResponse response, PubSubData pubSubData, Channel channel) throws Exception; /** * Logic to repost a PubSubRequest when the server responds with a redirect * indicating it is not the topic master. * * @param response * PubSubResponse from the server for the redirect * @param pubSubData * PubSubData for the original PubSubRequest made * @param channel * Channel we used to make the original PubSubRequest * @throws Exception * Throws an exception if there was an error in doing the * redirect repost of the PubSubRequest */ protected void handleRedirectResponse(PubSubResponse response, PubSubData pubSubData, Channel channel) throws Exception { if (logger.isDebugEnabled()) { logger.debug("Handling a redirect from host: {}, response: {}, pubSubData: {}", va(NetUtils.getHostFromChannel(channel), response, pubSubData)); } // In this case, the PubSub request was done to a server that is not // responsible for the topic. First make sure that we haven't // exceeded the maximum number of server redirects. int curNumServerRedirects = (pubSubData.triedServers == null) ? 0 : pubSubData.triedServers.size(); if (curNumServerRedirects >= cfg.getMaximumServerRedirects()) { // We've already exceeded the maximum number of server redirects // so consider this as an error condition for the client. // Invoke the operationFailed callback and just return. logger.debug("Exceeded the number of server redirects ({}) so error out.", curNumServerRedirects); PubSubException exception = new ServiceDownException( new TooManyServerRedirectsException("Already reached max number of redirects: " + curNumServerRedirects)); pubSubData.getCallback().operationFailed(pubSubData.context, exception); return; } // We will redirect and try to connect to the correct server // stored in the StatusMsg of the response. First store the // server that we sent the PubSub request to for the topic.
ByteString triedServer = ByteString.copyFromUtf8(HedwigSocketAddress.sockAddrStr( NetUtils.getHostFromChannel(channel))); if (pubSubData.triedServers == null) { pubSubData.triedServers = new LinkedList<ByteString>(); } pubSubData.shouldClaim = true; pubSubData.triedServers.add(triedServer); // Now get the redirected server host (expected format is // Hostname:Port:SSLPort) from the server's response message. If one is // not given for some reason, then redirect to the default server // host/VIP to repost the request. String statusMsg = response.getStatusMsg(); InetSocketAddress redirectedHost; boolean redirectToDefaultServer; if (statusMsg != null && statusMsg.length() > 0) { if (cfg.isSSLEnabled()) { redirectedHost = new HedwigSocketAddress(statusMsg).getSSLSocketAddress(); } else { redirectedHost = new HedwigSocketAddress(statusMsg).getSocketAddress(); } redirectToDefaultServer = false; } else { redirectedHost = cfg.getDefaultServerHost(); redirectToDefaultServer = true; } // Make sure the redirected server is not one we've already attempted // before in this PubSub request. if (pubSubData.triedServers.contains(ByteString.copyFromUtf8(HedwigSocketAddress.sockAddrStr(redirectedHost)))) { logger.error("We've already sent this PubSubRequest before to redirectedHost: {}, pubSubData: {}", va(redirectedHost, pubSubData)); PubSubException exception = new ServiceDownException( new ServerRedirectLoopException("Already made the request before to redirected host: " + redirectedHost)); pubSubData.getCallback().operationFailed(pubSubData.context, exception); return; } // submit the pub/sub request to redirected host if (redirectToDefaultServer) { channelManager.submitOpToDefaultServer(pubSubData); } else { channelManager.redirectToHost(pubSubData, redirectedHost); } } } CloseSubscriptionResponseHandler.java000066400000000000000000000077221244507361200402660ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/handlers/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License.
*/ package org.apache.hedwig.client.handlers; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.jboss.netty.channel.Channel; import org.apache.hedwig.client.conf.ClientConfiguration; import org.apache.hedwig.client.data.PubSubData; import org.apache.hedwig.client.data.TopicSubscriber; import org.apache.hedwig.client.netty.HChannelManager; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.exceptions.PubSubException.ClientNotSubscribedException; import org.apache.hedwig.exceptions.PubSubException.ServiceDownException; import org.apache.hedwig.protocol.PubSubProtocol.PubSubResponse; import org.apache.hedwig.protocol.PubSubProtocol.ResponseBody; import org.apache.hedwig.util.Callback; import static org.apache.hedwig.util.VarArgs.va; public class CloseSubscriptionResponseHandler extends AbstractResponseHandler { private static Logger logger = LoggerFactory.getLogger(CloseSubscriptionResponseHandler.class); public CloseSubscriptionResponseHandler(ClientConfiguration cfg, HChannelManager channelManager) { super(cfg, channelManager); } @Override public void handleResponse(final PubSubResponse response, final PubSubData pubSubData, final Channel channel) throws Exception { switch (response.getStatusCode()) { case SUCCESS: pubSubData.getCallback().operationFinished(pubSubData.context, null); break; case CLIENT_NOT_SUBSCRIBED: // For CloseSubscription requests, the server says that the client was // never subscribed to the topic. pubSubData.getCallback().operationFailed(pubSubData.context, new ClientNotSubscribedException( "Client was never subscribed to topic: " + pubSubData.topic.toStringUtf8() + ", subscriberId: " + pubSubData.subscriberId.toStringUtf8())); break; case SERVICE_DOWN: // Response was service down failure so just invoke the callback's // operationFailed method. pubSubData.getCallback().operationFailed(pubSubData.context, new ServiceDownException( "Server responded with a SERVICE_DOWN status")); break; case NOT_RESPONSIBLE_FOR_TOPIC: // Redirect response so we'll need to repost the original // CloseSubscription request handleRedirectResponse(response, pubSubData, channel); break; default: // Consider all other status codes as errors, operation failed // cases. logger.error("Unexpected error response from server for PubSubResponse: " + response); pubSubData.getCallback().operationFailed(pubSubData.context, new ServiceDownException( "Server responded with a status code of: " + response.getStatusCode())); break; } } } MessageConsumeCallback.java000066400000000000000000000125401244507361200361260ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/handlers/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License.
*/ package org.apache.hedwig.client.handlers; import java.util.TimerTask; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.apache.hedwig.client.conf.ClientConfiguration; import org.apache.hedwig.client.data.MessageConsumeData; import org.apache.hedwig.client.netty.HChannelManager; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.protoextensions.MessageIdUtils; import org.apache.hedwig.util.Callback; import static org.apache.hedwig.util.VarArgs.va; /** * This is the Callback used by the MessageHandlers on the client app when * they've finished consuming a subscription message sent from the server * asynchronously. This callback into the client libs will be stateless so we * can use a singleton for the class. The object context used should be the * MessageConsumeData type. That will contain all of the information needed to * call the message consume logic in the client lib HChannelHandler. * */ public class MessageConsumeCallback implements Callback<Void> { private static Logger logger = LoggerFactory.getLogger(MessageConsumeCallback.class); private final HChannelManager channelManager; private final long consumeRetryWaitTime; public MessageConsumeCallback(ClientConfiguration cfg, HChannelManager channelManager) { this.channelManager = channelManager; this.consumeRetryWaitTime = cfg.getMessageConsumeRetryWaitTime(); } class MessageConsumeRetryTask extends TimerTask { private final MessageConsumeData messageConsumeData; public MessageConsumeRetryTask(MessageConsumeData messageConsumeData) { this.messageConsumeData = messageConsumeData; } @Override public void run() { // Try to consume the message again SubscribeResponseHandler subscribeHChannelHandler = channelManager.getSubscribeResponseHandler(messageConsumeData.topicSubscriber); if (null == subscribeHChannelHandler || !subscribeHChannelHandler.hasSubscription(messageConsumeData.topicSubscriber)) { logger.warn("No subscription {} found to retry delivering message {}.", va(messageConsumeData.topicSubscriber, MessageIdUtils.msgIdToReadableString(messageConsumeData.msg.getMsgId()))); return; } subscribeHChannelHandler.asyncMessageDeliver(messageConsumeData.topicSubscriber, messageConsumeData.msg); } } public void operationFinished(Object ctx, Void resultOfOperation) { MessageConsumeData messageConsumeData = (MessageConsumeData) ctx; SubscribeResponseHandler subscribeHChannelHandler = channelManager.getSubscribeResponseHandler(messageConsumeData.topicSubscriber); if (null == subscribeHChannelHandler || !subscribeHChannelHandler.hasSubscription(messageConsumeData.topicSubscriber)) { logger.warn("No subscription {} found to consume message {}.", va(messageConsumeData.topicSubscriber, MessageIdUtils.msgIdToReadableString(messageConsumeData.msg.getMsgId()))); return; } // Message has been successfully consumed by the client app so callback // to the HChannelHandler indicating that the message is consumed. subscribeHChannelHandler.messageConsumed(messageConsumeData.topicSubscriber, messageConsumeData.msg); } public void operationFailed(Object ctx, PubSubException exception) { // Message has NOT been successfully consumed by the client app so // callback to the HChannelHandler to try the async MessageHandler // Consume logic again. MessageConsumeData messageConsumeData = (MessageConsumeData) ctx; logger.error("Message was not consumed successfully by client MessageHandler: {}", messageConsumeData); // Sleep a pre-configured amount of time (in milliseconds) before we // do the retry.
In the future, we can have more dynamic logic on // what duration to sleep based on how many times we've retried, or // perhaps what the last amount of time we slept was. We could stick // some of this meta-data into the MessageConsumeData when we retry. channelManager.schedule(new MessageConsumeRetryTask(messageConsumeData), consumeRetryWaitTime); } } PubSubCallback.java000066400000000000000000000066671244507361200344210ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/handlers/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.client.handlers; import org.apache.hedwig.protocol.PubSubProtocol; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.apache.hedwig.client.data.PubSubData; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.util.Callback; /** * This class is used when we are doing synchronous type of operations. All * underlying client ops in Hedwig are async so this is just a way to make the * async calls synchronous. * */ public class PubSubCallback implements Callback<PubSubProtocol.ResponseBody> { private static Logger logger = LoggerFactory.getLogger(PubSubCallback.class); // Private member variables private final PubSubData pubSubData; // Boolean indicator to see if the sync PubSub call was successful or not. private boolean isCallSuccessful; // For sync callbacks, we'd like to know which PubSubException was thrown // on failure. This is so we can have a handle to the exception and rethrow // it later. private PubSubException failureException; private PubSubProtocol.ResponseBody responseBody; // Constructor public PubSubCallback(PubSubData pubSubData) { this.pubSubData = pubSubData; } public void operationFinished(Object ctx, PubSubProtocol.ResponseBody resultOfOperation) { logger.debug("PubSub call succeeded for pubSubData: {}", pubSubData); // Wake up the main sync PubSub thread that is waiting for us to // complete. synchronized (pubSubData) { this.responseBody = resultOfOperation; isCallSuccessful = true; pubSubData.isDone = true; pubSubData.notify(); } } public void operationFailed(Object ctx, PubSubException exception) { logger.debug("PubSub call failed with exception: {}, pubSubData: {}", exception, pubSubData); // Wake up the main sync PubSub thread that is waiting for us to // complete. synchronized (pubSubData) { isCallSuccessful = false; failureException = exception; pubSubData.isDone = true; pubSubData.notify(); } } // Public getter to determine if the PubSub callback is successful or not // based on the PubSub ack response from the server. public boolean getIsCallSuccessful() { return isCallSuccessful; }
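/* Illustrative sketch (not part of the original source): how the client libs * turn an async operation synchronous with PubSubCallback. The caller blocks * on the shared PubSubData monitor until one of the callback methods marks it * done; doAsyncPublish is a hypothetical stand-in for the async call. * * PubSubCallback pubSubCallback = new PubSubCallback(pubSubData); * doAsyncPublish(topic, msg, pubSubCallback, null); * synchronized (pubSubData) { * while (!pubSubData.isDone) { * pubSubData.wait(); * } * } * if (!pubSubCallback.getIsCallSuccessful()) { * throw pubSubCallback.getFailureException(); * } */ // Public getter to retrieve the PubSubException that occurred when // the operation failed.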
public PubSubException getFailureException() { return failureException; } public PubSubProtocol.ResponseBody getResponseBody() { return responseBody; } } PublishResponseHandler.java000066400000000000000000000062651244507361200362250ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/handlers/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.client.handlers; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.jboss.netty.channel.Channel; import org.apache.hedwig.client.conf.ClientConfiguration; import org.apache.hedwig.client.data.PubSubData; import org.apache.hedwig.client.netty.HChannelManager; import org.apache.hedwig.exceptions.PubSubException.ServiceDownException; import org.apache.hedwig.protocol.PubSubProtocol; import org.apache.hedwig.protocol.PubSubProtocol.PubSubResponse; public class PublishResponseHandler extends AbstractResponseHandler { private static Logger logger = LoggerFactory.getLogger(PublishResponseHandler.class); public PublishResponseHandler(ClientConfiguration cfg, HChannelManager channelManager) { super(cfg, channelManager); } @Override public void handleResponse(PubSubResponse response, PubSubData pubSubData, Channel channel) throws Exception { switch (response.getStatusCode()) { case SUCCESS: // Response was success so invoke the callback's operationFinished // method. pubSubData.operationFinishedToCallback(pubSubData.context, response.hasResponseBody() ? response.getResponseBody() : null); break; case SERVICE_DOWN: // Response was service down failure so just invoke the callback's // operationFailed method. pubSubData.getCallback().operationFailed(pubSubData.context, new ServiceDownException( "Server responded with a SERVICE_DOWN status")); break; case NOT_RESPONSIBLE_FOR_TOPIC: // Redirect response so we'll need to repost the original Publish // Request handleRedirectResponse(response, pubSubData, channel); break; default: // Consider all other status codes as errors, operation failed // cases. logger.error("Unexpected error response from server for PubSubResponse: " + response); pubSubData.getCallback().operationFailed(pubSubData.context, new ServiceDownException( "Server responded with a status code of: " + response.getStatusCode())); break; } } } SubscribeResponseHandler.java000066400000000000000000000164731244507361200365420ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/handlers/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. 
The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.client.handlers; import java.net.InetSocketAddress; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.jboss.netty.channel.Channel; import com.google.protobuf.ByteString; import org.apache.hedwig.client.api.MessageHandler; import org.apache.hedwig.client.conf.ClientConfiguration; import org.apache.hedwig.client.data.TopicSubscriber; import org.apache.hedwig.client.exceptions.AlreadyStartDeliveryException; import org.apache.hedwig.client.exceptions.NoResponseHandlerException; import org.apache.hedwig.client.netty.HChannelManager; import org.apache.hedwig.exceptions.PubSubException.ClientNotSubscribedException; import org.apache.hedwig.protocol.PubSubProtocol.Message; import org.apache.hedwig.protocol.PubSubProtocol.MessageSeqId; import org.apache.hedwig.protocol.PubSubProtocol.PubSubResponse; import org.apache.hedwig.protocol.PubSubProtocol.ResponseBody; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionEvent; import org.apache.hedwig.protoextensions.SubscriptionStateUtils; import org.apache.hedwig.util.Callback; /** * A handler provided to manage all subscriptions on a channel. * * Its responsibility is to handle all subscribe responses received on that channel, * clean up subscriptions and retry reconnecting subscriptions when the channel disconnects, * and handle delivering messages to {@link MessageHandler} and sending consume messages * back to hub servers. */ public abstract class SubscribeResponseHandler extends AbstractResponseHandler { protected SubscribeResponseHandler(ClientConfiguration cfg, HChannelManager channelManager) { super(cfg, channelManager); } /** * Handle Message delivered by the server. * * @param response * Message received from the server. */ public abstract void handleSubscribeMessage(PubSubResponse response); /** * Handle a subscription event delivered by the server. * * @param topic * Topic Name * @param subscriberId * Subscriber Id * @param event * Subscription Event describing its status */ public abstract void handleSubscriptionEvent(ByteString topic, ByteString subscriberId, SubscriptionEvent event); /** * Method called when a message arrives for a subscribe Channel and we want * to deliver it asynchronously via the registered MessageHandler (should * not be null when called here). * * @param message * Message from Subscribe Channel we want to consume. */ protected abstract void asyncMessageDeliver(TopicSubscriber topicSubscriber, Message message); /** * Method called when the client app's MessageHandler has asynchronously * completed consuming a subscribed message sent from the server. The * contract with the client app is that messages sent to the handler to be * consumed will have the callback response done in the same order. So if we * asynchronously call the MessageHandler to consume messages #1-5, that * should call the messageConsumed method here via the VoidCallback in the * same order.
To make this thread safe, since multiple outstanding messages * could be consumed by the client app and then called back to here, make * this method synchronized. * * @param topicSubscriber * Topic Subscriber * @param message * Message sent from server for topic subscription that has been * consumed by the client. */ protected abstract void messageConsumed(TopicSubscriber topicSubscriber, Message message); /** * Start delivering messages for a given topic subscriber. * * @param topicSubscriber * Topic Subscriber * @param messageHandler * MessageHandler to register for this ResponseHandler instance. * @throws ClientNotSubscribedException * If the client is not currently subscribed to the topic * @throws AlreadyStartDeliveryException * If delivery was already started with another message handler * before the existing one was stopped. */ public abstract void startDelivery(TopicSubscriber topicSubscriber, MessageHandler messageHandler) throws ClientNotSubscribedException, AlreadyStartDeliveryException; /** * Stop delivering messages for a given topic subscriber. * * @param topicSubscriber * Topic Subscriber * @throws ClientNotSubscribedException * If the client is not currently subscribed to the topic */ public abstract void stopDelivery(TopicSubscriber topicSubscriber) throws ClientNotSubscribedException; /** * Whether the given topic subscriber subscribed through this handler. * * @param topicSubscriber * Topic Subscriber * @return whether the given topic subscriber subscribed through this handler. */ public abstract boolean hasSubscription(TopicSubscriber topicSubscriber); /** * Close subscription from this handler. * * @param topicSubscriber * Topic Subscriber * @param callback * Callback when the subscription is closed. * @param context * Callback context. */ public abstract void asyncCloseSubscription(TopicSubscriber topicSubscriber, Callback<ResponseBody> callback, Object context); /** * Consume a given message for a given topic subscriber through this handler. * * @param topicSubscriber * Topic Subscriber * @param messageSeqId * Seq id of the message to consume. */ public abstract void consume(TopicSubscriber topicSubscriber, MessageSeqId messageSeqId); /** * This method is called when the underlying channel is disconnected due to server failure. * * The implementation should take responsibility for clearing subscriptions and * reconnecting subscriptions to new hub servers. * * @param host * Host that the disconnected channel was connected to. * @param channel * Channel that was disconnected. */ public abstract void onChannelDisconnected(InetSocketAddress host, Channel channel); }
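/* Illustrative sketch (not part of the original source): the app-facing * Subscriber calls that ultimately drive startDelivery/stopDelivery on this * handler. The subscriber handle and the exact subscribe variant are * assumptions recalled from the Hedwig Subscriber API; treat as a sketch. * * subscriber.subscribe(topic, subscriberId, CreateOrAttach.CREATE_OR_ATTACH); * subscriber.startDelivery(topic, subscriberId, messageHandler); * // ... later, when the app no longer wants messages ... * subscriber.stopDelivery(topic, subscriberId); */ UnsubscribeResponseHandler.java000066400000000000000000000100601244507361200370700ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/handlers/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License.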
*/ package org.apache.hedwig.client.handlers; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.jboss.netty.channel.Channel; import org.apache.hedwig.client.conf.ClientConfiguration; import org.apache.hedwig.client.data.PubSubData; import org.apache.hedwig.client.data.TopicSubscriber; import org.apache.hedwig.client.netty.HChannelManager; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.exceptions.PubSubException.ClientNotSubscribedException; import org.apache.hedwig.exceptions.PubSubException.ServiceDownException; import org.apache.hedwig.protocol.PubSubProtocol.PubSubResponse; import org.apache.hedwig.protocol.PubSubProtocol.ResponseBody; import org.apache.hedwig.util.Callback; import static org.apache.hedwig.util.VarArgs.va; public class UnsubscribeResponseHandler extends AbstractResponseHandler { private static Logger logger = LoggerFactory.getLogger(UnsubscribeResponseHandler.class); public UnsubscribeResponseHandler(ClientConfiguration cfg, HChannelManager channelManager) { super(cfg, channelManager); } @Override public void handleResponse(final PubSubResponse response, final PubSubData pubSubData, final Channel channel) throws Exception { switch (response.getStatusCode()) { case SUCCESS: // Since for unsubscribe requests we close the subscription first, // we don't need to do anything more here. pubSubData.getCallback().operationFinished(pubSubData.context, null); break; case CLIENT_NOT_SUBSCRIBED: // For Unsubscribe requests, the server says that the client was // never subscribed to the topic. pubSubData.getCallback().operationFailed(pubSubData.context, new ClientNotSubscribedException( "Client was never subscribed to topic: " + pubSubData.topic.toStringUtf8() + ", subscriberId: " + pubSubData.subscriberId.toStringUtf8())); break; case SERVICE_DOWN: // Response was service down failure so just invoke the callback's // operationFailed method. pubSubData.getCallback().operationFailed(pubSubData.context, new ServiceDownException( "Server responded with a SERVICE_DOWN status")); break; case NOT_RESPONSIBLE_FOR_TOPIC: // Redirect response so we'll need to repost the original // Unsubscribe Request handleRedirectResponse(response, pubSubData, channel); break; default: // Consider all other status codes as errors, operation failed // cases. logger.error("Unexpected error response from server for PubSubResponse: " + response); pubSubData.getCallback().operationFailed(pubSubData.context, new ServiceDownException( "Server responded with a status code of: " + response.getStatusCode())); break; } } } bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/netty/000077500000000000000000000000001244507361200303505ustar00rootroot00000000000000CleanupChannelMap.java000066400000000000000000000134771244507361200344660ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/netty/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License.
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.client.netty; import java.util.Collection; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.locks.ReentrantReadWriteLock; import org.slf4j.Logger; import org.slf4j.LoggerFactory; public class CleanupChannelMap<T> { private static Logger logger = LoggerFactory.getLogger(CleanupChannelMap.class); private final ConcurrentHashMap<T, HChannel> channels; // Boolean indicating if the channel map is closed or not. protected boolean closed = false; protected final ReentrantReadWriteLock closedLock = new ReentrantReadWriteLock(); public CleanupChannelMap() { channels = new ConcurrentHashMap<T, HChannel>(); } /** * Add a channel to the map. If an old channel has already been bound * to key, the new channel is * closed immediately and the old channel is returned. Otherwise, * the channel is put in the map for future usage. * * If the channel map has been closed, the channel is closed * immediately. * * @param key * Key * @param channel * Channel * @return the channel instance to use. */ public HChannel addChannel(T key, HChannel channel) { this.closedLock.readLock().lock(); try { if (closed) { channel.close(); return channel; } HChannel oldChannel = channels.putIfAbsent(key, channel); if (null != oldChannel) { logger.info("Channel for {} already exists, so no need to store it.", key); channel.close(); return oldChannel; } else { logger.debug("Storing a new channel for {}.", key); return channel; } } finally { this.closedLock.readLock().unlock(); } } /** * Replace channel only if currently mapped to the given oldChannel. * * @param key * Key * @param oldChannel * Old Channel * @param newChannel * New Channel * @return true if replaced successfully, otherwise false. */ public boolean replaceChannel(T key, HChannel oldChannel, HChannel newChannel) { this.closedLock.readLock().lock(); try { if (closed) { if (null != oldChannel) oldChannel.close(); if (null != newChannel) newChannel.close(); return false; } if (null == oldChannel) { HChannel existedChannel = channels.putIfAbsent(key, newChannel); if (null != existedChannel) { logger.info("Channel for {} already exists, so no need to replace it.", key); newChannel.close(); return false; } else { logger.debug("Storing a new channel for {}.", key); return true; } } else { if (channels.replace(key, oldChannel, newChannel)) { logger.debug("Replaced channel {} for {}.", oldChannel, key); oldChannel.close(); return true; } else { newChannel.close(); return false; } } } finally { this.closedLock.readLock().unlock(); } } /** * Returns the channel bound with key. * * @param key Key * @return the channel bound with key. */ public HChannel getChannel(T key) { return channels.get(key); } /** * Remove the channel bound with key. * * @param key Key * @return the channel bound with key, null if no channel * is bound with key. */ public HChannel removeChannel(T key) { return channels.remove(key); }
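/* Illustrative sketch (not part of the original source): typical use by a * channel manager that keys channels by host address. addChannel returns the * winning channel and closes the losing duplicate if two callers race; the * variable names here are hypothetical. * * CleanupChannelMap<InetSocketAddress> channels = * new CleanupChannelMap<InetSocketAddress>(); * HChannel channel = channels.addChannel(host, newHChannel); * channel.submitOp(pubSubData); * // on shutdown, close every stored channel exactly once * channels.close(); */ /** * Remove the channel bound with key. * * @param key Key * @param channel The channel expected to be bound with key. * @return true if the channel is removed, false otherwise.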
*/ public boolean removeChannel(T key, HChannel channel) { return channels.remove(key, channel); } /** * Return the channels in the map. * * @return the set of channels. */ public Collection<HChannel> getChannels() { return channels.values(); } /** * Close the channels map. */ public void close() { closedLock.writeLock().lock(); try { if (closed) { return; } closed = true; } finally { closedLock.writeLock().unlock(); } logger.debug("Closing channels map."); for (HChannel channel : channels.values()) { channel.close(true); } channels.clear(); logger.debug("Closed channels map."); } } FilterableMessageHandler.java000066400000000000000000000043641244507361200360170ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/netty/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.client.netty; import com.google.protobuf.ByteString; import org.apache.hedwig.client.api.MessageHandler; import org.apache.hedwig.filter.ClientMessageFilter; import org.apache.hedwig.protocol.PubSubProtocol.Message; import org.apache.hedwig.util.Callback; /** * A message handler used by a subscription, applying a client-side message * filter before delivery. */ public class FilterableMessageHandler implements MessageHandler { MessageHandler msgHandler; ClientMessageFilter msgFilter; public FilterableMessageHandler(MessageHandler msgHandler, ClientMessageFilter msgFilter) { this.msgHandler = msgHandler; this.msgFilter = msgFilter; } public boolean hasMessageHandler() { return null != msgHandler; } public MessageHandler getMessageHandler() { return msgHandler; } public boolean hasMessageFilter() { return null != msgFilter; } public ClientMessageFilter getMessageFilter() { return msgFilter; } @Override public void deliver(ByteString topic, ByteString subscriberId, Message msg, Callback<Void> callback, Object context) { boolean deliver = true; if (hasMessageFilter()) { deliver = msgFilter.testMessage(msg); } if (deliver) { msgHandler.deliver(topic, subscriberId, msg, callback, context); } else { callback.operationFinished(context, null); } } }
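/* Illustrative sketch (not part of the original source): wrapping an * application's handler so only messages accepted by a ClientMessageFilter * are delivered. appHandler, myFilter and the subscriber handle are * hypothetical; startDelivery registers the wrapped handler for the * subscription. * * MessageHandler filtered = * new FilterableMessageHandler(appHandler, myFilter); * subscriber.startDelivery(topic, subscriberId, filtered); */ bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/netty/HChannel.java000066400000000000000000000030321244507361200326710ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License.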
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.client.netty; import org.jboss.netty.channel.Channel; import org.apache.hedwig.client.data.PubSubData; /** * A wrapper interface over netty {@link Channel} to submit hedwig's * {@link PubSubData} requests. */ public interface HChannel { /** * Submit a pub/sub request. * * @param op * Pub/Sub Request. */ public void submitOp(PubSubData op); /** * @return underlying netty channel */ public Channel getChannel(); /** * Close the channel without waiting. */ public void close(); /** * Close the channel * * @param wait * Whether wait until the channel is closed. */ public void close(boolean wait); } HChannelManager.java000066400000000000000000000121741244507361200341140ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/netty/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.client.netty; import java.net.InetSocketAddress; import java.util.TimerTask; import org.apache.hedwig.client.api.MessageHandler; import org.apache.hedwig.client.data.PubSubData; import org.apache.hedwig.client.data.TopicSubscriber; import org.apache.hedwig.client.exceptions.AlreadyStartDeliveryException; import org.apache.hedwig.client.handlers.SubscribeResponseHandler; import org.apache.hedwig.exceptions.PubSubException.ClientNotSubscribedException; import org.apache.hedwig.protocol.PubSubProtocol.ResponseBody; import org.apache.hedwig.util.Callback; /** * A manager manages 1) all channels established to hub servers, * 2) the actions taken by the topic subscribers. */ public interface HChannelManager { /** * Submit a pub/sub request after a given delay. * * @param op * Pub/Sub Request. * @param delay * Delay time in ms. */ public void submitOpAfterDelay(PubSubData op, long delay); /** * Submit a pub/sub request. * * @param pubSubData * Pub/Sub Request. */ public void submitOp(PubSubData pubSubData); /** * Submit a pub/sub request to default server. * * @param pubSubData * Pub/Sub request. */ public void submitOpToDefaultServer(PubSubData pubSubData); /** * Submit a pub/sub request to a given host. * * @param pubSubData * Pub/Sub request. * @param host * Given host address. */ public void redirectToHost(PubSubData pubSubData, InetSocketAddress host); /** * Generate next transaction id for pub/sub request sending thru this manager. * * @return next transaction id. 
*/ public long nextTxnId(); /** * Schedule a timer task after a given delay. * * @param task * A timer task * @param delay * Delay time in ms. */ public void schedule(TimerTask task, long delay); /** * Get the subscribe response handler managing the given topicSubscriber. * * @param topicSubscriber * Topic Subscriber * @return the subscribe response handler managing it, or null if there is none. */ public SubscribeResponseHandler getSubscribeResponseHandler( TopicSubscriber topicSubscriber); /** * Start delivering messages for a given topic subscriber. * * @param topicSubscriber * Topic Subscriber * @param messageHandler * MessageHandler to register for this ResponseHandler instance. * @throws ClientNotSubscribedException * If the client is not currently subscribed to the topic * @throws AlreadyStartDeliveryException * If delivery has already been started with a message handler and the existing one has not been stopped. */ public void startDelivery(TopicSubscriber topicSubscriber, MessageHandler messageHandler) throws ClientNotSubscribedException, AlreadyStartDeliveryException; /** * Stop delivering messages for a given topic subscriber. * * @param topicSubscriber * Topic Subscriber * @throws ClientNotSubscribedException * If the client is not currently subscribed to the topic */ public void stopDelivery(TopicSubscriber topicSubscriber) throws ClientNotSubscribedException; /** * Close the subscription of the given topicSubscriber. * * @param topicSubscriber * Topic Subscriber * @param callback * Callback * @param context * Callback context */ public void asyncCloseSubscription(TopicSubscriber topicSubscriber, Callback<ResponseBody> callback, Object context); /** * Return the subscription event emitter to emit subscription events. * * @return subscription event emitter. */ public SubscriptionEventEmitter getSubscriptionEventEmitter(); /** * Is the channel manager closed. * * @return true if the channel manager is closed, otherwise return false. */ public boolean isClosed(); /** * Close the channel manager. */ public void close(); } HedwigClientImpl.java000066400000000000000000000107171244507361200343320ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/netty/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License.
*/ package org.apache.hedwig.client.netty; import java.util.concurrent.Executors; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.jboss.netty.channel.ChannelFactory; import org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory; import com.google.protobuf.ByteString; import org.apache.hedwig.client.api.Client; import org.apache.hedwig.client.conf.ClientConfiguration; import org.apache.hedwig.client.netty.impl.simple.SimpleHChannelManager; import org.apache.hedwig.client.netty.impl.multiplex.MultiplexHChannelManager; /** * This is a top level Hedwig Client class that encapsulates the common * functionality needed for both Publish and Subscribe operations. * */ public class HedwigClientImpl implements Client { private static final Logger logger = LoggerFactory.getLogger(HedwigClientImpl.class); // The Netty socket factory for making connections to the server. protected final ChannelFactory socketFactory; // Whether the socket factory is one we created or is owned by whoever // instantiated us. protected boolean ownChannelFactory = false; // channel manager manages all the channels established by the client protected final HChannelManager channelManager; private HedwigSubscriber sub; private final HedwigPublisher pub; private final ClientConfiguration cfg; public static Client create(ClientConfiguration cfg) { return new HedwigClientImpl(cfg); } public static Client create(ClientConfiguration cfg, ChannelFactory socketFactory) { return new HedwigClientImpl(cfg, socketFactory); } // Base constructor that takes in a Configuration object. // This will create its own client socket channel factory. protected HedwigClientImpl(ClientConfiguration cfg) { this(cfg, new NioClientSocketChannelFactory( Executors.newCachedThreadPool(), Executors.newCachedThreadPool())); ownChannelFactory = true; } // Constructor that takes in a Configuration object and a ChannelFactory // that has already been instantiated by the caller. protected HedwigClientImpl(ClientConfiguration cfg, ChannelFactory socketFactory) { this.cfg = cfg; this.socketFactory = socketFactory; if (cfg.isSubscriptionChannelSharingEnabled()) { channelManager = new MultiplexHChannelManager(cfg, socketFactory); } else { channelManager = new SimpleHChannelManager(cfg, socketFactory); } pub = new HedwigPublisher(this); sub = new HedwigSubscriber(this); } public ClientConfiguration getConfiguration() { return cfg; } public HChannelManager getHChannelManager() { return channelManager; } public HedwigSubscriber getSubscriber() { return sub; } // Protected method to set the subscriber. This is needed currently for hub // versions of the client subscriber. protected void setSubscriber(HedwigSubscriber sub) { this.sub = sub; } public HedwigPublisher getPublisher() { return pub; } // When we are done with the client, this is a clean way to gracefully close // all channels/sockets created by the client and to also release all // resources used by netty. public void close() { logger.info("Stopping the client!"); // close channel manager to release all channels channelManager.close(); // Release resources used by the ChannelFactory on the client if we are // the owner that created it. 
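// An illustrative (non-normative) sketch of the two ownership modes; the
// variable names below are hypothetical:
//
//   // factory created and owned by the client, released inside close():
//   Client c1 = HedwigClientImpl.create(cfg);
//   /* ... use c1 ... */ c1.close();
//
//   // factory owned by the caller, who must release it after closing the client:
//   ChannelFactory f = new NioClientSocketChannelFactory(
//       Executors.newCachedThreadPool(), Executors.newCachedThreadPool());
//   Client c2 = HedwigClientImpl.create(cfg, f);
//   /* ... use c2 ... */ c2.close(); f.releaseExternalResources();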
if (ownChannelFactory) { socketFactory.releaseExternalResources(); } logger.info("Completed stopping the client!"); } } HedwigPublisher.java000066400000000000000000000153751244507361200342340ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/netty/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.client.netty; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import com.google.protobuf.ByteString; import org.apache.hedwig.client.api.Publisher; import org.apache.hedwig.client.data.PubSubData; import org.apache.hedwig.client.handlers.PubSubCallback; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.exceptions.PubSubException.CouldNotConnectException; import org.apache.hedwig.exceptions.PubSubException.ServiceDownException; import org.apache.hedwig.protocol.PubSubProtocol.Message; import org.apache.hedwig.protocol.PubSubProtocol.OperationType; import org.apache.hedwig.protocol.PubSubProtocol.PublishResponse; import org.apache.hedwig.protocol.PubSubProtocol.ResponseBody; import org.apache.hedwig.util.Callback; /** * This is the Hedwig Netty specific implementation of the Publisher interface. * */ public class HedwigPublisher implements Publisher { private static Logger logger = LoggerFactory.getLogger(HedwigPublisher.class); private final HChannelManager channelManager; protected HedwigPublisher(HedwigClientImpl client) { this.channelManager = client.getHChannelManager(); } public PublishResponse publish(ByteString topic, Message msg) throws CouldNotConnectException, ServiceDownException { if (logger.isDebugEnabled()) { logger.debug("Calling a sync publish for topic: {}, msg: {}.", topic.toStringUtf8(), msg); } PubSubData pubSubData = new PubSubData(topic, msg, null, OperationType.PUBLISH, null, null, null); synchronized (pubSubData) { PubSubCallback pubSubCallback = new PubSubCallback(pubSubData); asyncPublishWithResponseImpl(topic, msg, pubSubCallback, null); try { while (!pubSubData.isDone) pubSubData.wait(); } catch (InterruptedException e) { throw new ServiceDownException("Interrupted Exception while waiting for async publish call"); } // Check from the PubSubCallback if it was successful or not. if (!pubSubCallback.getIsCallSuccessful()) { // See what the exception was that was thrown when the operation // failed. PubSubException failureException = pubSubCallback.getFailureException(); if (failureException == null) { // This should not happen as the operation failed but a null // PubSubException was passed. Log a warning message but // throw a generic ServiceDownException. 
logger.error("Sync Publish operation failed but no PubSubException was passed!"); throw new ServiceDownException("Server ack response to publish request is not successful"); } // For the expected exceptions that could occur, just rethrow // them. else if (failureException instanceof CouldNotConnectException) { throw (CouldNotConnectException) failureException; } else if (failureException instanceof ServiceDownException) { throw (ServiceDownException) failureException; } else { // For other types of PubSubExceptions, just throw a generic // ServiceDownException but log a warning message. logger.error("Unexpected exception type when a sync publish operation failed: ", failureException); throw new ServiceDownException("Server ack response to publish request is not successful"); } } ResponseBody respBody = pubSubCallback.getResponseBody(); if (null == respBody) { return null; } return respBody.hasPublishResponse() ? respBody.getPublishResponse() : null; } } public void asyncPublish(ByteString topic, Message msg, final Callback callback, Object context) { asyncPublishWithResponseImpl(topic, msg, new VoidCallbackAdapter(callback), context); } public void asyncPublishWithResponse(ByteString topic, Message msg, Callback callback, Object context) { // adapt the callback. asyncPublishWithResponseImpl(topic, msg, new PublishResponseCallbackAdapter(callback), context); } private void asyncPublishWithResponseImpl(ByteString topic, Message msg, Callback callback, Object context) { if (logger.isDebugEnabled()) { logger.debug("Calling an async publish for topic: {}, msg: {}.", topic.toStringUtf8(), msg); } PubSubData pubSubData = new PubSubData(topic, msg, null, OperationType.PUBLISH, null, callback, context); channelManager.submitOp(pubSubData); } private static class PublishResponseCallbackAdapter implements Callback{ private final Callback delegate; private PublishResponseCallbackAdapter(Callback delegate) { this.delegate = delegate; } @Override public void operationFinished(Object ctx, ResponseBody resultOfOperation) { if (null == resultOfOperation) { delegate.operationFinished(ctx, null); } else { delegate.operationFinished(ctx, resultOfOperation.getPublishResponse()); } } @Override public void operationFailed(Object ctx, PubSubException exception) { delegate.operationFailed(ctx, exception); } } } HedwigSubscriber.java000066400000000000000000000545701244507361200344020ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/netty/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package org.apache.hedwig.client.netty; import java.util.List; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import com.google.protobuf.ByteString; import org.apache.hedwig.client.api.MessageHandler; import org.apache.hedwig.client.api.Subscriber; import org.apache.hedwig.client.conf.ClientConfiguration; import org.apache.hedwig.client.data.PubSubData; import org.apache.hedwig.client.data.TopicSubscriber; import org.apache.hedwig.client.exceptions.AlreadyStartDeliveryException; import org.apache.hedwig.client.exceptions.InvalidSubscriberIdException; import org.apache.hedwig.client.handlers.PubSubCallback; import org.apache.hedwig.client.handlers.SubscribeResponseHandler; import org.apache.hedwig.filter.ClientMessageFilter; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.exceptions.PubSubException.ClientAlreadySubscribedException; import org.apache.hedwig.exceptions.PubSubException.ClientNotSubscribedException; import org.apache.hedwig.exceptions.PubSubException.CouldNotConnectException; import org.apache.hedwig.exceptions.PubSubException.ServiceDownException; import org.apache.hedwig.protocol.PubSubProtocol.MessageSeqId; import org.apache.hedwig.protocol.PubSubProtocol.OperationType; import org.apache.hedwig.protocol.PubSubProtocol.ResponseBody; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionOptions; import org.apache.hedwig.protocol.PubSubProtocol.SubscribeRequest.CreateOrAttach; import org.apache.hedwig.protoextensions.SubscriptionStateUtils; import org.apache.hedwig.util.Callback; import org.apache.hedwig.util.SubscriptionListener; /** * This is the Hedwig Netty specific implementation of the Subscriber interface. * */ public class HedwigSubscriber implements Subscriber { private static Logger logger = LoggerFactory.getLogger(HedwigSubscriber.class); protected final ClientConfiguration cfg; protected final HChannelManager channelManager; public HedwigSubscriber(HedwigClientImpl client) { this.cfg = client.getConfiguration(); this.channelManager = client.getHChannelManager(); } public void addSubscriptionListener(SubscriptionListener listener) { channelManager.getSubscriptionEventEmitter() .addSubscriptionListener(listener); } public void removeSubscriptionListener(SubscriptionListener listener) { channelManager.getSubscriptionEventEmitter() .removeSubscriptionListener(listener); } // Private method that holds the common logic for doing synchronous // Subscribe or Unsubscribe requests. This is for code reuse since these // two flows are very similar. The assumption is that the input // OperationType is either SUBSCRIBE or UNSUBSCRIBE. 
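// The synchronous flow below is the classic sync-over-async idiom: issue the
// asynchronous request with a callback that marks the request done, then wait on
// the request object's monitor until notified. A minimal standalone sketch of the
// idiom (names illustrative; here PubSubData plays the request role and
// PubSubCallback the notifying-callback role):
//   synchronized (request) {
//       issueAsync(request, callbackThatSetsIsDoneAndNotifiesRequest);
//       while (!request.isDone) request.wait();
//   }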
private void subUnsub(ByteString topic, ByteString subscriberId, OperationType operationType, SubscriptionOptions options) throws CouldNotConnectException, ClientAlreadySubscribedException, ClientNotSubscribedException, ServiceDownException { if (logger.isDebugEnabled()) { StringBuilder debugMsg = new StringBuilder().append("Calling a sync subUnsub request for topic: ") .append(topic.toStringUtf8()).append(", subscriberId: ") .append(subscriberId.toStringUtf8()).append(", operationType: ") .append(operationType); if (null != options) { debugMsg.append(", createOrAttach: ").append(options.getCreateOrAttach()) .append(", messageBound: ").append(options.getMessageBound()); } logger.debug(debugMsg.toString()); } PubSubData pubSubData = new PubSubData(topic, null, subscriberId, operationType, options, null, null); synchronized (pubSubData) { PubSubCallback pubSubCallback = new PubSubCallback(pubSubData); asyncSubUnsub(topic, subscriberId, pubSubCallback, null, operationType, options); try { while (!pubSubData.isDone) pubSubData.wait(); } catch (InterruptedException e) { throw new ServiceDownException("Interrupted Exception while waiting for async subUnsub call"); } // Check from the PubSubCallback if it was successful or not. if (!pubSubCallback.getIsCallSuccessful()) { // See what the exception was that was thrown when the operation // failed. PubSubException failureException = pubSubCallback.getFailureException(); if (failureException == null) { // This should not happen as the operation failed but a null // PubSubException was passed. Log a warning message but // throw a generic ServiceDownException. logger.error("Sync SubUnsub operation failed but no PubSubException was passed!"); throw new ServiceDownException("Server ack response to SubUnsub request is not successful"); } // For the expected exceptions that could occur, just rethrow // them. else if (failureException instanceof CouldNotConnectException) throw (CouldNotConnectException) failureException; else if (failureException instanceof ClientAlreadySubscribedException) throw (ClientAlreadySubscribedException) failureException; else if (failureException instanceof ClientNotSubscribedException) throw (ClientNotSubscribedException) failureException; else if (failureException instanceof ServiceDownException) throw (ServiceDownException) failureException; else { logger.error("Unexpected PubSubException thrown: ", failureException); // Throw a generic ServiceDownException but wrap the // original PubSubException within it. throw new ServiceDownException(failureException); } } } } // Private method that holds the common logic for doing asynchronous // Subscribe or Unsubscribe requests. This is for code reuse since these two // flows are very similar. The assumption is that the input OperationType is // either SUBSCRIBE or UNSUBSCRIBE. 
private void asyncSubUnsub(ByteString topic, ByteString subscriberId, Callback<ResponseBody> callback, Object context, OperationType operationType, SubscriptionOptions options) { if (logger.isDebugEnabled()) { StringBuilder debugMsg = new StringBuilder().append("Calling an async subUnsub request for topic: ") .append(topic.toStringUtf8()).append(", subscriberId: ") .append(subscriberId.toStringUtf8()).append(", operationType: ") .append(operationType); if (null != options) { debugMsg.append(", createOrAttach: ").append(options.getCreateOrAttach()) .append(", messageBound: ").append(options.getMessageBound()); } logger.debug(debugMsg.toString()); } if (OperationType.SUBSCRIBE.equals(operationType)) { if (options.getMessageBound() <= 0 && cfg.getSubscriptionMessageBound() > 0) { SubscriptionOptions.Builder soBuilder = SubscriptionOptions.newBuilder(options).setMessageBound( cfg.getSubscriptionMessageBound()); options = soBuilder.build(); } } PubSubData pubSubData = new PubSubData(topic, null, subscriberId, operationType, options, callback, context); channelManager.submitOp(pubSubData); } public void subscribe(ByteString topic, ByteString subscriberId, CreateOrAttach mode) throws CouldNotConnectException, ClientAlreadySubscribedException, ServiceDownException, InvalidSubscriberIdException { SubscriptionOptions options = SubscriptionOptions.newBuilder().setCreateOrAttach(mode).build(); subscribe(topic, subscriberId, options, false); } public void subscribe(ByteString topic, ByteString subscriberId, SubscriptionOptions options) throws CouldNotConnectException, ClientAlreadySubscribedException, ServiceDownException, InvalidSubscriberIdException { subscribe(topic, subscriberId, options, false); } protected void subscribe(ByteString topic, ByteString subscriberId, SubscriptionOptions options, boolean isHub) throws CouldNotConnectException, ClientAlreadySubscribedException, ServiceDownException, InvalidSubscriberIdException { // Validate that the format of the subscriberId is valid either as a // local or hub subscriber. if (!isValidSubscriberId(subscriberId, isHub)) { throw new InvalidSubscriberIdException("SubscriberId passed is not valid: " + subscriberId.toStringUtf8() + ", isHub: " + isHub); } try { subUnsub(topic, subscriberId, OperationType.SUBSCRIBE, options); } catch (ClientNotSubscribedException e) { logger.error("Unexpected Exception thrown: ", e); // This exception should never be thrown here. But just in case, // throw a generic ServiceDownException but wrap the original // Exception within it. throw new ServiceDownException(e); } } public void asyncSubscribe(ByteString topic, ByteString subscriberId, CreateOrAttach mode, Callback<Void> callback, Object context) { SubscriptionOptions options = SubscriptionOptions.newBuilder().setCreateOrAttach(mode).build(); asyncSubscribe(topic, subscriberId, options, callback, context, false); } public void asyncSubscribe(ByteString topic, ByteString subscriberId, SubscriptionOptions options, Callback<Void> callback, Object context) { asyncSubscribe(topic, subscriberId, options, callback, context, false); } protected void asyncSubscribe(ByteString topic, ByteString subscriberId, SubscriptionOptions options, Callback<Void> callback, Object context, boolean isHub) { // Validate that the format of the subscriberId is valid either as a // local or hub subscriber.
if (!isValidSubscriberId(subscriberId, isHub)) { callback.operationFailed(context, new ServiceDownException(new InvalidSubscriberIdException( "SubscriberId passed is not valid: " + subscriberId.toStringUtf8() + ", isHub: " + isHub))); return; } asyncSubUnsub(topic, subscriberId, new VoidCallbackAdapter<ResponseBody>(callback), context, OperationType.SUBSCRIBE, options); } public void unsubscribe(ByteString topic, ByteString subscriberId) throws CouldNotConnectException, ClientNotSubscribedException, ServiceDownException, InvalidSubscriberIdException { unsubscribe(topic, subscriberId, false); } protected void unsubscribe(ByteString topic, ByteString subscriberId, boolean isHub) throws CouldNotConnectException, ClientNotSubscribedException, ServiceDownException, InvalidSubscriberIdException { // Validate that the format of the subscriberId is valid either as a // local or hub subscriber. if (!isValidSubscriberId(subscriberId, isHub)) { throw new InvalidSubscriberIdException("SubscriberId passed is not valid: " + subscriberId.toStringUtf8() + ", isHub: " + isHub); } // Synchronously close the subscription on the client side. Even // if the unsubscribe request to the server errors out, we won't be // delivering messages for this subscription to the client. The client // can later retry the unsubscribe request to the server so they are // "fully" unsubscribed from the given topic. closeSubscription(topic, subscriberId); try { subUnsub(topic, subscriberId, OperationType.UNSUBSCRIBE, null); } catch (ClientAlreadySubscribedException e) { logger.error("Unexpected Exception thrown: ", e); // This exception should never be thrown here. But just in case, // throw a generic ServiceDownException but wrap the original // Exception within it. throw new ServiceDownException(e); } } public void asyncUnsubscribe(final ByteString topic, final ByteString subscriberId, final Callback<Void> callback, final Object context) { doAsyncUnsubscribe(topic, subscriberId, new VoidCallbackAdapter<ResponseBody>(callback), context, false); } protected void asyncUnsubscribe(final ByteString topic, final ByteString subscriberId, final Callback<Void> callback, final Object context, boolean isHub) { doAsyncUnsubscribe(topic, subscriberId, new VoidCallbackAdapter<ResponseBody>(callback), context, isHub); } private void doAsyncUnsubscribe(final ByteString topic, final ByteString subscriberId, final Callback<ResponseBody> callback, final Object context, boolean isHub) { // Validate that the format of the subscriberId is valid either as a // local or hub subscriber. if (!isValidSubscriberId(subscriberId, isHub)) { callback.operationFailed(context, new ServiceDownException(new InvalidSubscriberIdException( "SubscriberId passed is not valid: " + subscriberId.toStringUtf8() + ", isHub: " + isHub))); return; } // Asynchronously close the subscription. On the callback to that // operation once it completes, post the async unsubscribe request.
doAsyncCloseSubscription(topic, subscriberId, new Callback<ResponseBody>() { @Override public void operationFinished(Object ctx, ResponseBody resultOfOperation) { asyncSubUnsub(topic, subscriberId, callback, context, OperationType.UNSUBSCRIBE, null); } @Override public void operationFailed(Object ctx, PubSubException exception) { callback.operationFailed(context, exception); } }, null); } // This is a helper method to determine if a subscriberId is valid as either // a hub or local subscriber private boolean isValidSubscriberId(ByteString subscriberId, boolean isHub) { if ((isHub && !SubscriptionStateUtils.isHubSubscriber(subscriberId)) || (!isHub && SubscriptionStateUtils.isHubSubscriber(subscriberId))) return false; else return true; } public void consume(ByteString topic, ByteString subscriberId, MessageSeqId messageSeqId) throws ClientNotSubscribedException { TopicSubscriber topicSubscriber = new TopicSubscriber(topic, subscriberId); logger.debug("Calling consume for {}, messageSeqId: {}.", topicSubscriber, messageSeqId); SubscribeResponseHandler subscribeResponseHandler = channelManager.getSubscribeResponseHandler(topicSubscriber); // Check that this topic subscription on the client side exists. if (null == subscribeResponseHandler || !subscribeResponseHandler.hasSubscription(topicSubscriber)) { throw new ClientNotSubscribedException( "Cannot send consume message since client is not subscribed to topic: " + topic.toStringUtf8() + ", subscriberId: " + subscriberId.toStringUtf8()); } // Send the consume message to the server using the same subscribe // channel that the topic subscription uses. subscribeResponseHandler.consume(topicSubscriber, messageSeqId); } public boolean hasSubscription(ByteString topic, ByteString subscriberId) throws CouldNotConnectException, ServiceDownException { // The subscription type of info should be stored on the server end, not // the client side. Eventually, the server will have the Subscription // Manager part that ties into Zookeeper to manage this info. // Commenting out these type of API's related to that here for now until // this data is available on the server. Will figure out what the // correct way to contact the server to get this info is then. // The client side just has soft memory state for client subscription // information. TopicSubscriber topicSubscriber = new TopicSubscriber(topic, subscriberId); SubscribeResponseHandler subscribeResponseHandler = channelManager.getSubscribeResponseHandler(topicSubscriber); return !(null == subscribeResponseHandler || !subscribeResponseHandler.hasSubscription(topicSubscriber)); } public List<ByteString> getSubscriptionList(ByteString subscriberId) throws CouldNotConnectException, ServiceDownException { // Same as the previous hasSubscription method, this data should reside // on the server end, not the client side.
return null; } public void startDelivery(final ByteString topic, final ByteString subscriberId, MessageHandler messageHandler) throws ClientNotSubscribedException, AlreadyStartDeliveryException { TopicSubscriber topicSubscriber = new TopicSubscriber(topic, subscriberId); logger.debug("Starting delivery for {}.", topicSubscriber); channelManager.startDelivery(topicSubscriber, messageHandler); } public void startDeliveryWithFilter(final ByteString topic, final ByteString subscriberId, MessageHandler messageHandler, ClientMessageFilter messageFilter) throws ClientNotSubscribedException, AlreadyStartDeliveryException { if (null == messageHandler || null == messageFilter) { throw new NullPointerException("Null message handler or message filter is provided."); } TopicSubscriber topicSubscriber = new TopicSubscriber(topic, subscriberId); messageHandler = new FilterableMessageHandler(messageHandler, messageFilter); logger.debug("Starting delivery with filter for {}.", topicSubscriber); channelManager.startDelivery(topicSubscriber, messageHandler); } public void stopDelivery(final ByteString topic, final ByteString subscriberId) throws ClientNotSubscribedException { TopicSubscriber topicSubscriber = new TopicSubscriber(topic, subscriberId); logger.debug("Stopping delivery for {}.", topicSubscriber); channelManager.stopDelivery(topicSubscriber); } public void closeSubscription(ByteString topic, ByteString subscriberId) throws ServiceDownException { PubSubData pubSubData = new PubSubData(topic, null, subscriberId, null, null, null, null); synchronized (pubSubData) { PubSubCallback pubSubCallback = new PubSubCallback(pubSubData); doAsyncCloseSubscription(topic, subscriberId, pubSubCallback, null); try { while (!pubSubData.isDone) pubSubData.wait(); } catch (InterruptedException e) { throw new ServiceDownException("Interrupted Exception while waiting for asyncCloseSubscription call"); } // Check from the PubSubCallback if it was successful or not. if (!pubSubCallback.getIsCallSuccessful()) { throw new ServiceDownException("Exception while trying to close the subscription for topic: " + topic.toStringUtf8() + ", subscriberId: " + subscriberId.toStringUtf8()); } } } public void asyncCloseSubscription(final ByteString topic, final ByteString subscriberId, final Callback<Void> callback, final Object context) { doAsyncCloseSubscription(topic, subscriberId, new VoidCallbackAdapter<ResponseBody>(callback), context); } private void doAsyncCloseSubscription(final ByteString topic, final ByteString subscriberId, final Callback<ResponseBody> callback, final Object context) { TopicSubscriber topicSubscriber = new TopicSubscriber(topic, subscriberId); logger.debug("Stopping delivery for {} before closing subscription.", topicSubscriber); // We only stop delivery here, not in the channel manager, // because channelManager#asyncCloseSubscription will be called // when the subscription channel disconnects, to clear the local subscription. try { channelManager.stopDelivery(topicSubscriber); } catch (ClientNotSubscribedException cnse) { // it is OK to ignore the exception when closing subscription } logger.debug("Closing subscription asynchronously for {}.", topicSubscriber); channelManager.asyncCloseSubscription(topicSubscriber, callback, context); } } bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/netty/NetUtils.java000066400000000000000000000223341244507361200327660ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements.
See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.client.netty; import java.net.InetSocketAddress; import org.jboss.netty.channel.Channel; import org.apache.hedwig.client.data.PubSubData; import org.apache.hedwig.client.data.TopicSubscriber; import org.apache.hedwig.protocol.PubSubProtocol.CloseSubscriptionRequest; import org.apache.hedwig.protocol.PubSubProtocol.ConsumeRequest; import org.apache.hedwig.protocol.PubSubProtocol.MessageSeqId; import org.apache.hedwig.protocol.PubSubProtocol.OperationType; import org.apache.hedwig.protocol.PubSubProtocol.PublishRequest; import org.apache.hedwig.protocol.PubSubProtocol.PubSubRequest; import org.apache.hedwig.protocol.PubSubProtocol.ProtocolVersion; import org.apache.hedwig.protocol.PubSubProtocol.SubscribeRequest; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionOptions; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionPreferences; import org.apache.hedwig.protocol.PubSubProtocol.UnsubscribeRequest; /** * Utilities for network operations. */ public class NetUtils { /** * Helper static method to get the host address from a netty * Channel. Assumption is that the netty Channel was originally created with * an InetSocketAddress. This is true with the Hedwig netty implementation. * * @param channel * Netty channel to extract the hostname and port from. * @return the remote InetSocketAddress of the Netty Channel */ public static InetSocketAddress getHostFromChannel(Channel channel) { return (InetSocketAddress) channel.getRemoteAddress(); } /** * This is a helper method to build the actual pub/sub message. * * @param txnId * Transaction Id. * @param pubSubData * Publish call's data wrapper object.
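 * (Illustrative use, with {@code mgr} standing for any {@link HChannelManager}:
 * {@code PubSubRequest req = NetUtils.buildPubSubRequest(mgr.nextTxnId(), pubSubData).build();})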
* @return pub sub request to send */ public static PubSubRequest.Builder buildPubSubRequest(long txnId, PubSubData pubSubData) { // Create a PubSubRequest PubSubRequest.Builder pubsubRequestBuilder = PubSubRequest.newBuilder(); pubsubRequestBuilder.setProtocolVersion(ProtocolVersion.VERSION_ONE); pubsubRequestBuilder.setType(pubSubData.operationType); // for consume request, we don't need to care about tried servers list if (OperationType.CONSUME != pubSubData.operationType) { if (pubSubData.triedServers != null && pubSubData.triedServers.size() > 0) { pubsubRequestBuilder.addAllTriedServers(pubSubData.triedServers); } } pubsubRequestBuilder.setTxnId(txnId); pubsubRequestBuilder.setShouldClaim(pubSubData.shouldClaim); pubsubRequestBuilder.setTopic(pubSubData.topic); switch (pubSubData.operationType) { case PUBLISH: // Set the PublishRequest into the outer PubSubRequest pubsubRequestBuilder.setPublishRequest(buildPublishRequest(pubSubData)); break; case SUBSCRIBE: // Set the SubscribeRequest into the outer PubSubRequest pubsubRequestBuilder.setSubscribeRequest(buildSubscribeRequest(pubSubData)); break; case UNSUBSCRIBE: // Set the UnsubscribeRequest into the outer PubSubRequest pubsubRequestBuilder.setUnsubscribeRequest(buildUnsubscribeRequest(pubSubData)); break; case CLOSESUBSCRIPTION: // Set the CloseSubscriptionRequest into the outer PubSubRequest pubsubRequestBuilder.setCloseSubscriptionRequest( buildCloseSubscriptionRequest(pubSubData)); break; } // Update the PubSubData with the txnId and the requestWriteTime pubSubData.txnId = txnId; pubSubData.requestWriteTime = System.currentTimeMillis(); return pubsubRequestBuilder; } // build publish request private static PublishRequest.Builder buildPublishRequest(PubSubData pubSubData) { PublishRequest.Builder publishRequestBuilder = PublishRequest.newBuilder(); publishRequestBuilder.setMsg(pubSubData.msg); return publishRequestBuilder; } // build subscribe request private static SubscribeRequest.Builder buildSubscribeRequest(PubSubData pubSubData) { SubscribeRequest.Builder subscribeRequestBuilder = SubscribeRequest.newBuilder(); subscribeRequestBuilder.setSubscriberId(pubSubData.subscriberId); subscribeRequestBuilder.setCreateOrAttach(pubSubData.options.getCreateOrAttach()); subscribeRequestBuilder.setForceAttach(pubSubData.options.getForceAttach()); // For now, all subscribes should wait for all cross-regional // subscriptions to be established before returning. 
subscribeRequestBuilder.setSynchronous(true); // set subscription preferences SubscriptionPreferences.Builder preferencesBuilder = options2Preferences(pubSubData.options); // backward compatible with 4.1.0 if (preferencesBuilder.hasMessageBound()) { subscribeRequestBuilder.setMessageBound(preferencesBuilder.getMessageBound()); } subscribeRequestBuilder.setPreferences(preferencesBuilder); return subscribeRequestBuilder; } // build unsubscribe request private static UnsubscribeRequest.Builder buildUnsubscribeRequest(PubSubData pubSubData) { // Create the UnSubscribeRequest UnsubscribeRequest.Builder unsubscribeRequestBuilder = UnsubscribeRequest.newBuilder(); unsubscribeRequestBuilder.setSubscriberId(pubSubData.subscriberId); return unsubscribeRequestBuilder; } // build closesubscription request private static CloseSubscriptionRequest.Builder buildCloseSubscriptionRequest(PubSubData pubSubData) { // Create the CloseSubscriptionRequest CloseSubscriptionRequest.Builder closeSubscriptionRequestBuilder = CloseSubscriptionRequest.newBuilder(); closeSubscriptionRequestBuilder.setSubscriberId(pubSubData.subscriberId); return closeSubscriptionRequestBuilder; } /** * Build consume request * * @param txnId * Transaction Id. * @param topicSubscriber * Topic Subscriber. * @param messageSeqId * Message Seq Id. * @return pub/sub request. */ public static PubSubRequest.Builder buildConsumeRequest(long txnId, TopicSubscriber topicSubscriber, MessageSeqId messageSeqId) { // Create a PubSubRequest PubSubRequest.Builder pubsubRequestBuilder = PubSubRequest.newBuilder(); pubsubRequestBuilder.setProtocolVersion(ProtocolVersion.VERSION_ONE); pubsubRequestBuilder.setType(OperationType.CONSUME); pubsubRequestBuilder.setTxnId(txnId); pubsubRequestBuilder.setTopic(topicSubscriber.getTopic()); // Create the ConsumeRequest ConsumeRequest.Builder consumeRequestBuilder = ConsumeRequest.newBuilder(); consumeRequestBuilder.setSubscriberId(topicSubscriber.getSubscriberId()); consumeRequestBuilder.setMsgId(messageSeqId); pubsubRequestBuilder.setConsumeRequest(consumeRequestBuilder); return pubsubRequestBuilder; } /** * Convert client-side subscription options to subscription preferences * * @param options * Client-Side subscription options * @return subscription preferences */ private static SubscriptionPreferences.Builder options2Preferences(SubscriptionOptions options) { // prepare subscription preferences SubscriptionPreferences.Builder preferencesBuilder = SubscriptionPreferences.newBuilder(); // set message bound if (options.getMessageBound() > 0) { preferencesBuilder.setMessageBound(options.getMessageBound()); } // set message filter if (options.hasMessageFilter()) { preferencesBuilder.setMessageFilter(options.getMessageFilter()); } // set user options if (options.hasOptions()) { preferencesBuilder.setOptions(options.getOptions()); } // set message window size if set if (options.hasMessageWindowSize() && options.getMessageWindowSize() > 0) { preferencesBuilder.setMessageWindowSize(options.getMessageWindowSize()); } return preferencesBuilder; } } SubscriptionEventEmitter.java000066400000000000000000000034171244507361200361610ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/netty/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership.
The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.client.netty; import java.util.concurrent.CopyOnWriteArraySet; import com.google.protobuf.ByteString; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionEvent; import org.apache.hedwig.util.SubscriptionListener; public class SubscriptionEventEmitter { private final CopyOnWriteArraySet<SubscriptionListener> listeners; public SubscriptionEventEmitter() { listeners = new CopyOnWriteArraySet<SubscriptionListener>(); } public void addSubscriptionListener(SubscriptionListener listener) { listeners.add(listener); } public void removeSubscriptionListener(SubscriptionListener listener) { listeners.remove(listener); } public void emitSubscriptionEvent(ByteString topic, ByteString subscriberId, SubscriptionEvent event) { for (SubscriptionListener listener : listeners) { listener.processEvent(topic, subscriberId, event); } } } VoidCallbackAdapter.java000066400000000000000000000027571244507361200347660ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/netty/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.client.netty; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.util.Callback; /** * Adapts from Callback<T> to Callback<Void>. (Ignores the <T> parameter). */ public class VoidCallbackAdapter<T> implements Callback<T> { private final Callback<Void> delegate; public VoidCallbackAdapter(Callback<Void> delegate){ this.delegate = delegate; } @Override public void operationFinished(Object ctx, T resultOfOperation) { delegate.operationFinished(ctx, null); } @Override public void operationFailed(Object ctx, PubSubException exception) { delegate.operationFailed(ctx, exception); } } bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/netty/impl/000077500000000000000000000000001244507361200313115ustar00rootroot00000000000000AbstractHChannelManager.java000066400000000000000000000622551244507361200365460ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/netty/impl/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements.
See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.client.netty.impl; import java.net.InetSocketAddress; import java.util.HashSet; import java.util.Set; import java.util.Timer; import java.util.TimerTask; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ConcurrentMap; import java.util.concurrent.atomic.AtomicLong; import java.util.concurrent.locks.ReentrantReadWriteLock; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.jboss.netty.channel.Channel; import org.jboss.netty.channel.ChannelFactory; import org.jboss.netty.channel.ChannelFuture; import org.jboss.netty.channel.ChannelFutureListener; import com.google.protobuf.ByteString; import org.apache.hedwig.client.api.MessageHandler; import org.apache.hedwig.client.conf.ClientConfiguration; import org.apache.hedwig.client.data.PubSubData; import org.apache.hedwig.client.data.TopicSubscriber; import org.apache.hedwig.client.exceptions.AlreadyStartDeliveryException; import org.apache.hedwig.client.exceptions.NoResponseHandlerException; import org.apache.hedwig.client.handlers.MessageConsumeCallback; import org.apache.hedwig.client.handlers.SubscribeResponseHandler; import org.apache.hedwig.client.netty.CleanupChannelMap; import org.apache.hedwig.client.netty.HChannel; import org.apache.hedwig.client.netty.HChannelManager; import org.apache.hedwig.client.netty.NetUtils; import org.apache.hedwig.client.netty.SubscriptionEventEmitter; import org.apache.hedwig.client.ssl.SslClientContextFactory; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.exceptions.PubSubException.ClientNotSubscribedException; import org.apache.hedwig.exceptions.PubSubException.ServiceDownException; import org.apache.hedwig.filter.ClientMessageFilter; import org.apache.hedwig.protocol.PubSubProtocol; import org.apache.hedwig.protocol.PubSubProtocol.Message; import org.apache.hedwig.protocol.PubSubProtocol.MessageHeader; import org.apache.hedwig.protocol.PubSubProtocol.MessageSeqId; import org.apache.hedwig.protocol.PubSubProtocol.OperationType; import org.apache.hedwig.protocol.PubSubProtocol.PubSubRequest; import org.apache.hedwig.protocol.PubSubProtocol.ResponseBody; import org.apache.hedwig.util.Callback; import static org.apache.hedwig.util.VarArgs.va; /** * Basic HChannel Manager Implementation */ public abstract class AbstractHChannelManager implements HChannelManager { private static Logger logger = LoggerFactory.getLogger(AbstractHChannelManager.class); // Empty Topic List private final static Set<ByteString> EMPTY_TOPIC_SET = new HashSet<ByteString>(); // Boolean indicating if the channel manager is running or has been closed. // Once we stop the manager, we should sidestep all of the connect, write callback // and channel disconnected logic.
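// The closed flag below is guarded by a ReentrantReadWriteLock: every normal
// operation takes the read lock and bails out if closed, while close() takes the
// write lock to flip the flag exactly once. A minimal standalone sketch of the
// pattern (illustrative only, not the exact code in this class):
//   closedLock.readLock().lock();
//   try { if (closed) return; doWork(); } finally { closedLock.readLock().unlock(); }
//   // and in close():
//   closedLock.writeLock().lock();
//   try { if (closed) return; closed = true; } finally { closedLock.writeLock().unlock(); }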
protected boolean closed = false; protected final ReentrantReadWriteLock closedLock = new ReentrantReadWriteLock(); // Global counter used for generating unique transaction ID's for // publish and subscribe requests protected final AtomicLong globalCounter = new AtomicLong(); // Concurrent Map to store the mapping from the Topic to the Host. // This could change over time since servers can drop mastership of topics // for load balancing or failover. If a server host ever goes down, we'd // also want to remove all topic mappings the host was responsible for. // The second Map is used as the inverted version of the first one. protected final ConcurrentMap<ByteString, InetSocketAddress> topic2Host = new ConcurrentHashMap<ByteString, InetSocketAddress>(); // The inverse mapping is used only when clearing all topics. For performance // consideration, we don't guarantee host2Topics to be consistent with // topic2Host. It would be better not to rely on this mapping for anything // significant. protected final ConcurrentMap<InetSocketAddress, Set<ByteString>> host2Topics = new ConcurrentHashMap<InetSocketAddress, Set<ByteString>>(); // These channels will be used for publish and unsubscribe requests protected final CleanupChannelMap<InetSocketAddress> host2NonSubscriptionChannels = new CleanupChannelMap<InetSocketAddress>(); private final ClientConfiguration cfg; // The Netty socket factory for making connections to the server. protected final ChannelFactory socketFactory; // PipelineFactory to create non-subscription netty channels to the appropriate server private final ClientChannelPipelineFactory nonSubscriptionChannelPipelineFactory; // ssl context factory private SslClientContextFactory sslFactory = null; // default server channel private final HChannel defaultServerChannel; // Each client instantiation will have a Timer for running recurring // threads. One such timer task is to time out long-running // PubSubRequests that are waiting for an ack response from the server. private final Timer clientTimer = new Timer(true); // a common consume callback for all consume requests. private final MessageConsumeCallback consumeCb; // An event emitter to emit subscription events private final SubscriptionEventEmitter eventEmitter; protected AbstractHChannelManager(ClientConfiguration cfg, ChannelFactory socketFactory) { this.cfg = cfg; this.socketFactory = socketFactory; this.nonSubscriptionChannelPipelineFactory = new NonSubscriptionChannelPipelineFactory(cfg, this); // create a default server channel defaultServerChannel = new DefaultServerChannel(cfg.getDefaultServerHost(), this); if (cfg.isSSLEnabled()) { sslFactory = new SslClientContextFactory(cfg); } consumeCb = new MessageConsumeCallback(cfg, this); eventEmitter = new SubscriptionEventEmitter(); // Schedule Request Timeout task.
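// A minimal standalone sketch of the recurring-check pattern used on the next
// line (names illustrative): a daemon Timer runs a periodic task that scans
// outstanding requests and fails the timed-out ones.
//   Timer timer = new Timer(true); // daemon, so it won't block JVM shutdown
//   timer.schedule(new TimerTask() {
//       @Override public void run() { /* scan outstanding requests, fail timed-out ones */ }
//   }, 0, timeoutRunIntervalMs);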
clientTimer.schedule(new PubSubRequestTimeoutTask(), 0, cfg.getTimeoutThreadRunInterval()); } @Override public SubscriptionEventEmitter getSubscriptionEventEmitter() { return eventEmitter; } public MessageConsumeCallback getConsumeCallback() { return consumeCb; } public SslClientContextFactory getSslFactory() { return sslFactory; } protected ChannelFactory getChannelFactory() { return socketFactory; } protected ClientChannelPipelineFactory getNonSubscriptionChannelPipelineFactory() { return this.nonSubscriptionChannelPipelineFactory; } protected abstract ClientChannelPipelineFactory getSubscriptionChannelPipelineFactory(); @Override public void schedule(final TimerTask task, final long delay) { this.closedLock.readLock().lock(); try { if (closed) { logger.warn("Task {} is not scheduled because the channel manager is closed.", task); return; } clientTimer.schedule(task, delay); } finally { this.closedLock.readLock().unlock(); } } @Override public void submitOpAfterDelay(final PubSubData pubSubData, final long delay) { this.closedLock.readLock().lock(); try { if (closed) { pubSubData.getCallback().operationFailed(pubSubData.context, new ServiceDownException("Client has been closed.")); return; } clientTimer.schedule(new TimerTask() { @Override public void run() { logger.debug("Submit request {} in {} ms later.", va(pubSubData, delay)); submitOp(pubSubData); } }, delay); } finally { closedLock.readLock().unlock(); } } @Override public void submitOp(PubSubData pubSubData) { HChannel hChannel; if (OperationType.PUBLISH.equals(pubSubData.operationType) || OperationType.UNSUBSCRIBE.equals(pubSubData.operationType)) { hChannel = getNonSubscriptionChannelByTopic(pubSubData.topic); } else { TopicSubscriber ts = new TopicSubscriber(pubSubData.topic, pubSubData.subscriberId); hChannel = getSubscriptionChannelByTopicSubscriber(ts); } // no channel found to submit pubsub data // choose the default server if (null == hChannel) { hChannel = defaultServerChannel; } hChannel.submitOp(pubSubData); } @Override public void redirectToHost(PubSubData pubSubData, InetSocketAddress host) { logger.debug("Submit operation {} to host {}.", va(pubSubData, host)); HChannel hChannel; if (OperationType.PUBLISH.equals(pubSubData.operationType) || OperationType.UNSUBSCRIBE.equals(pubSubData.operationType)) { hChannel = getNonSubscriptionChannel(host); if (null == hChannel) { // create a channel to connect to specified host hChannel = createAndStoreNonSubscriptionChannel(host); } } else { hChannel = getSubscriptionChannel(host); if (null == hChannel) { // create a subscription channel to specified host hChannel = createAndStoreSubscriptionChannel(host); } } // no channel found to submit pubsub data // choose the default server if (null == hChannel) { hChannel = defaultServerChannel; } hChannel.submitOp(pubSubData); } void submitOpThruChannel(PubSubData pubSubData, Channel channel) { logger.debug("Submit operation {} thru channel {}.", va(pubSubData, channel)); HChannel hChannel; if (OperationType.PUBLISH.equals(pubSubData.operationType) || OperationType.UNSUBSCRIBE.equals(pubSubData.operationType)) { hChannel = createAndStoreNonSubscriptionChannel(channel); } else { hChannel = createAndStoreSubscriptionChannel(channel); } hChannel.submitOp(pubSubData); } @Override public void submitOpToDefaultServer(PubSubData pubSubData) { logger.debug("Submit operation {} to default server {}.", va(pubSubData, defaultServerChannel)); defaultServerChannel.submitOp(pubSubData); } // Synchronized method to store the host2Channel mapping
(if it doesn't // exist yet). Retrieve the hostname info from the Channel created via the // RemoteAddress tied to it. private HChannel createAndStoreNonSubscriptionChannel(Channel channel) { InetSocketAddress host = NetUtils.getHostFromChannel(channel); HChannel newHChannel = new HChannelImpl(host, channel, this, getNonSubscriptionChannelPipelineFactory()); return storeNonSubscriptionChannel(host, newHChannel); } private HChannel createAndStoreNonSubscriptionChannel(InetSocketAddress host) { HChannel newHChannel = new HChannelImpl(host, this, getNonSubscriptionChannelPipelineFactory()); return storeNonSubscriptionChannel(host, newHChannel); } private HChannel storeNonSubscriptionChannel(InetSocketAddress host, HChannel newHChannel) { return host2NonSubscriptionChannels.addChannel(host, newHChannel); } /** * Is there an existing {@link HChannel} for a given host. * * @param host * Target host address. */ private HChannel getNonSubscriptionChannel(InetSocketAddress host) { return host2NonSubscriptionChannels.getChannel(host); } /** * Get a non-subscription channel for a given topic. * * @param topic * Topic Name * @return null if the topic's owner is unknown; * if the topic's owner is known and a channel * already exists, return the existing channel, otherwise create * a new one. */ private HChannel getNonSubscriptionChannelByTopic(ByteString topic) { InetSocketAddress host = topic2Host.get(topic); if (null == host) { // we don't know where the topic is return null; } else { // we know which server owns the topic HChannel channel = getNonSubscriptionChannel(host); if (null == channel) { // create a channel to connect to specified host channel = createAndStoreNonSubscriptionChannel(host); } return channel; } } /** * Handle the disconnected event from a non-subscription {@link HChannel}. * * @param host * Which host is disconnected. * @param channel * The underlying established channel. */ protected void onNonSubscriptionChannelDisconnected(InetSocketAddress host, Channel channel) { // Only remove the Channel from the mapping if this current // disconnected channel is the same as the cached entry. // Due to race conditions, it is possible to // create multiple channels to the same host for publish // and unsubscribe requests. HChannel hChannel = host2NonSubscriptionChannels.getChannel(host); if (null == hChannel) { return; } Channel underlyingChannel = hChannel.getChannel(); if (null == underlyingChannel || !underlyingChannel.equals(channel)) { return; } logger.info("NonSubscription Channel {} to {} disconnected.", va(channel, host)); // remove the existing channel if (host2NonSubscriptionChannels.removeChannel(host, hChannel)) { clearAllTopicsForHost(host); } } /** * Create and store a subscription {@link HChannel} thru the underlying established * channel. * * @param channel * The underlying established subscription channel. */ protected abstract HChannel createAndStoreSubscriptionChannel(Channel channel); /** * Create and store a subscription {@link HChannel} to target host. * * @param host * Target host address. */ protected abstract HChannel createAndStoreSubscriptionChannel(InetSocketAddress host); /** * Is there an existing subscription {@link HChannel} for a given host. * * @param host * Target host address. */ protected abstract HChannel getSubscriptionChannel(InetSocketAddress host); /** * Get a subscription channel for a given topicSubscriber. * * @param topicSubscriber * Topic Subscriber * @return null if the topic's owner is unknown;
* if the owner is known and a channel * already exists, return the existing channel, otherwise create * a new one for the topicSubscriber. */ protected abstract HChannel getSubscriptionChannelByTopicSubscriber(TopicSubscriber topicSubscriber); /** * Handle the disconnected event from a subscription {@link HChannel}. * * @param host * Which host is disconnected. * @param channel * The underlying established channel. */ protected abstract void onSubscriptionChannelDisconnected(InetSocketAddress host, Channel channel); private void sendConsumeRequest(final TopicSubscriber topicSubscriber, final MessageSeqId messageSeqId, final Channel channel) { PubSubRequest.Builder pubsubRequestBuilder = NetUtils.buildConsumeRequest(nextTxnId(), topicSubscriber, messageSeqId); // For Consume requests, we will send them from the client in a fire and // forget manner. We are not expecting the server to send back an ack // response so no need to register this in the ResponseHandler. There // are no callbacks to invoke since this isn't a client initiated // action. Instead, just have a future listener that will log an error // message if there was a problem writing the consume request. logger.debug("Writing a Consume request to host: {} with messageSeqId: {} for {}", va(NetUtils.getHostFromChannel(channel), messageSeqId, topicSubscriber)); ChannelFuture future = channel.write(pubsubRequestBuilder.build()); future.addListener(new ChannelFutureListener() { @Override public void operationComplete(ChannelFuture future) throws Exception { if (!future.isSuccess()) { logger.error("Error writing a Consume request to host: {} with messageSeqId: {} for {}", va(NetUtils.getHostFromChannel(channel), messageSeqId, topicSubscriber)); } } }); } /** * Helper method to store the topic2Host mapping in the channel manager cache * map. This method is assumed to be called when we've done a successful * connection to the correct server topic master. * * @param topic * Topic Name * @param host * Host Address */ protected void storeTopic2HostMapping(ByteString topic, InetSocketAddress host) { InetSocketAddress oldHost = topic2Host.putIfAbsent(topic, host); if (null != oldHost && oldHost.equals(host)) { // Entry in map exists for the topic but it is the same as the // current host. In this case there is nothing to do. return; } if (null != oldHost) { if (topic2Host.replace(topic, oldHost, host)) { // Store the relevant mappings for this topic and host combination. logger.debug("Storing info for topic: {}, old host: {}, new host: {}.", va(topic.toStringUtf8(), oldHost, host)); clearHostForTopic(topic, oldHost); } else { logger.warn("Ownership of topic: {} has been changed from {} to {} when storing host: {}", va(topic.toStringUtf8(), oldHost, topic2Host.get(topic), host)); return; } } else { logger.debug("Storing info for topic: {}, host: {}.", va(topic.toStringUtf8(), host)); } Set<ByteString> topicsForHost = host2Topics.get(host); if (null == topicsForHost) { Set<ByteString> newTopicsSet = new HashSet<ByteString>(); topicsForHost = host2Topics.putIfAbsent(host, newTopicsSet); if (null == topicsForHost) { topicsForHost = newTopicsSet; } } synchronized (topicsForHost) { // check whether the ownership changed, since it might have changed // after the replace succeeded if (host.equals(topic2Host.get(topic))) { topicsForHost.add(topic); } } } // (An editor's sketch of this ownership update appears below.) // If a server host goes down or the channel to it gets disconnected, // we want to clear out all relevant cached information. We'll // need to remove all of the topic mappings that the host was // responsible for.
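// ------------------------------------------------------------------
// Editor's illustrative sketch (not part of the original class): a
// condensed view of the compare-and-swap ownership update performed by
// storeTopic2HostMapping above. Ownership only moves to the new host if
// no concurrent update wins the race; the stale mapping for the old
// host is then cleared. The host-wide cleanup described in the
// preceding comment is implemented by clearAllTopicsForHost below.
private void sketchTopicOwnershipMove(ByteString topic, InetSocketAddress host) {
    InetSocketAddress oldHost = topic2Host.putIfAbsent(topic, host);
    if (null != oldHost && !oldHost.equals(host)
            && topic2Host.replace(topic, oldHost, host)) {
        // The topic moved from oldHost to host: drop the stale mapping.
        clearHostForTopic(topic, oldHost);
    }
}
// ------------------------------------------------------------------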
protected void clearAllTopicsForHost(InetSocketAddress host) { logger.debug("Clearing all topics for host: {}", host); // For each of the topics that the host was responsible for, // remove it from the topic2Host mapping. Set<ByteString> topicsForHost = host2Topics.get(host); if (null != topicsForHost) { synchronized (topicsForHost) { for (ByteString topic : topicsForHost) { logger.debug("Removing mapping for topic: {} from host: {}.", va(topic.toStringUtf8(), host)); topic2Host.remove(topic, host); } } // Now it is safe to remove the host2Topics mapping entry. host2Topics.remove(host, topicsForHost); } } // If a subscribe channel goes down, the topic might have moved. // We only clear out that topic for the host and not all cached information. public void clearHostForTopic(ByteString topic, InetSocketAddress host) { logger.debug("Clearing topic: {} from host: {}.", va(topic.toStringUtf8(), host)); if (topic2Host.remove(topic, host)) { logger.debug("Removed topic to host mapping for topic: {} and host: {}.", va(topic.toStringUtf8(), host)); } Set<ByteString> topicsForHost = host2Topics.get(host); if (null != topicsForHost) { boolean removed; synchronized (topicsForHost) { removed = topicsForHost.remove(topic); } if (removed) { logger.debug("Removed topic: {} from host: {}.", topic.toStringUtf8(), host); if (topicsForHost.isEmpty()) { // remove the entry only if the topic list is empty host2Topics.remove(host, EMPTY_TOPIC_SET); } } } } @Override public long nextTxnId() { return globalCounter.incrementAndGet(); } // We need to deal with the possible problem of a PubSub request being // written successfully to the server host but for some reason, the // ack message back never comes. What could happen is that the VoidCallback // stored in the ResponseHandler.txn2PublishData map will never be called. // We should have a configured timeout so if that passes from the time a // write was successfully done to the server, we can fail this async PubSub // transaction. The caller could possibly redo the transaction if needed at // a later time. Creating a timeout cleaner TimerTask to do this here. class PubSubRequestTimeoutTask extends TimerTask { /** * Implement the TimerTask's abstract run method. */ @Override public void run() { if (isClosed()) { return; } logger.debug("Running the PubSubRequest Timeout Task"); // First check those non-subscription channels for (HChannel channel : host2NonSubscriptionChannels.getChannels()) { try { HChannelHandler channelHandler = HChannelImpl.getHChannelHandlerFromChannel(channel.getChannel()); channelHandler.checkTimeoutRequests(); } catch (NoResponseHandlerException nrhe) { continue; } } // Then check those subscription channels checkTimeoutRequestsOnSubscriptionChannels(); } } protected abstract void restartDelivery(TopicSubscriber topicSubscriber) throws ClientNotSubscribedException, AlreadyStartDeliveryException; /** * Check for timed-out pub/sub requests on subscription channels. */ protected abstract void checkTimeoutRequestsOnSubscriptionChannels(); @Override public boolean isClosed() { closedLock.readLock().lock(); try { return closed; } finally { closedLock.readLock().unlock(); } } /** * Close all subscription channels when closing the channel manager.
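 * <p>Editor's note (illustrative, derived from {@code close()} below): the
 * shutdown sequence cancels the client timer, closes all non-subscription
 * channels, invokes this method to close the subscription channels, and
 * finally clears the topic2Host and host2Topics caches.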
*/ protected abstract void closeSubscriptionChannels(); @Override public void close() { logger.info("Shutting down the channels manager."); closedLock.writeLock().lock(); try { // already closed; nothing more to do if (closed) { return; } closed = true; } finally { closedLock.writeLock().unlock(); } clientTimer.cancel(); // Clear all existing channels host2NonSubscriptionChannels.close(); // clear all subscription channels closeSubscriptionChannels(); // Clear out all Maps topic2Host.clear(); host2Topics.clear(); } } AbstractSubscribeResponseHandler.java000066400000000000000000000377231244507361200405310ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/netty/impl/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.client.netty.impl; import java.net.InetSocketAddress; import java.util.LinkedList; import java.util.Queue; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ConcurrentMap; import java.util.concurrent.locks.ReentrantReadWriteLock; import com.google.protobuf.ByteString; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.jboss.netty.channel.Channel; import org.jboss.netty.channel.ChannelFuture; import org.jboss.netty.channel.ChannelFutureListener; import org.apache.hedwig.client.api.MessageHandler; import org.apache.hedwig.client.conf.ClientConfiguration; import org.apache.hedwig.client.data.MessageConsumeData; import org.apache.hedwig.client.data.PubSubData; import org.apache.hedwig.client.data.TopicSubscriber; import org.apache.hedwig.client.exceptions.AlreadyStartDeliveryException; import org.apache.hedwig.client.handlers.SubscribeResponseHandler; import org.apache.hedwig.client.netty.HChannelManager; import org.apache.hedwig.client.netty.HChannel; import org.apache.hedwig.client.netty.NetUtils; import org.apache.hedwig.client.netty.FilterableMessageHandler; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.exceptions.PubSubException.ClientAlreadySubscribedException; import org.apache.hedwig.exceptions.PubSubException.ClientNotSubscribedException; import org.apache.hedwig.exceptions.PubSubException.ServiceDownException; import org.apache.hedwig.exceptions.PubSubException.UnexpectedConditionException; import org.apache.hedwig.filter.ClientMessageFilter; import org.apache.hedwig.protocol.PubSubProtocol.Message; import org.apache.hedwig.protocol.PubSubProtocol.MessageSeqId; import org.apache.hedwig.protocol.PubSubProtocol.OperationType; import org.apache.hedwig.protocol.PubSubProtocol.PubSubRequest; import org.apache.hedwig.protocol.PubSubProtocol.PubSubResponse; import org.apache.hedwig.protocol.PubSubProtocol.ResponseBody; import org.apache.hedwig.protocol.PubSubProtocol.StatusCode; import
org.apache.hedwig.protocol.PubSubProtocol.SubscribeResponse; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionEvent; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionPreferences; import org.apache.hedwig.protoextensions.MessageIdUtils; import org.apache.hedwig.protoextensions.SubscriptionStateUtils; import org.apache.hedwig.util.Callback; import org.apache.hedwig.util.Either; import org.apache.hedwig.util.SubscriptionListener; import static org.apache.hedwig.util.VarArgs.va; public abstract class AbstractSubscribeResponseHandler extends SubscribeResponseHandler { private static Logger logger = LoggerFactory.getLogger(AbstractSubscribeResponseHandler.class); protected final ReentrantReadWriteLock disconnectLock = new ReentrantReadWriteLock(); protected final ConcurrentMap<TopicSubscriber, ActiveSubscriber> subscriptions = new ConcurrentHashMap<TopicSubscriber, ActiveSubscriber>(); protected final AbstractHChannelManager aChannelManager; protected AbstractSubscribeResponseHandler(ClientConfiguration cfg, HChannelManager channelManager) { super(cfg, channelManager); this.aChannelManager = (AbstractHChannelManager) channelManager; } protected HChannelManager getHChannelManager() { return this.channelManager; } protected ClientConfiguration getConfiguration() { return cfg; } protected ActiveSubscriber getActiveSubscriber(TopicSubscriber ts) { return subscriptions.get(ts); } protected ActiveSubscriber createActiveSubscriber( ClientConfiguration cfg, AbstractHChannelManager channelManager, TopicSubscriber ts, PubSubData op, SubscriptionPreferences preferences, Channel channel, HChannel hChannel) { return new ActiveSubscriber(cfg, channelManager, ts, op, preferences, channel, hChannel); } @Override public void handleResponse(PubSubResponse response, PubSubData pubSubData, Channel channel) throws Exception { if (logger.isDebugEnabled()) { logger.debug("Handling a Subscribe response: {}, pubSubData: {}, host: {}.", va(response, pubSubData, NetUtils.getHostFromChannel(channel))); } switch (response.getStatusCode()) { case SUCCESS: TopicSubscriber ts = new TopicSubscriber(pubSubData.topic, pubSubData.subscriberId); SubscriptionPreferences preferences = null; if (response.hasResponseBody()) { ResponseBody respBody = response.getResponseBody(); if (respBody.hasSubscribeResponse()) { SubscribeResponse resp = respBody.getSubscribeResponse(); if (resp.hasPreferences()) { preferences = resp.getPreferences(); if (logger.isDebugEnabled()) { logger.debug("Received subscription preferences for {} : {}", va(ts, SubscriptionStateUtils.toString(preferences))); } } } } Either<StatusCode, HChannel> result; StatusCode statusCode; ActiveSubscriber ss = null; // Store the Subscribe state disconnectLock.readLock().lock(); try { result = handleSuccessResponse(ts, pubSubData, channel); statusCode = result.left(); if (StatusCode.SUCCESS == statusCode) { ss = createActiveSubscriber( cfg, aChannelManager, ts, pubSubData, preferences, channel, result.right()); statusCode = addSubscription(ts, ss); } } finally { disconnectLock.readLock().unlock(); } if (StatusCode.SUCCESS == statusCode) { postHandleSuccessResponse(ts, ss); // Response was success so invoke the callback's operationFinished // method. pubSubData.getCallback().operationFinished(pubSubData.context, null); } else { PubSubException exception = PubSubException.create(statusCode, "Client is already subscribed for " + ts); pubSubData.getCallback().operationFailed(pubSubData.context, exception); } break; case CLIENT_ALREADY_SUBSCRIBED: // For Subscribe requests, the server says that the client is // already subscribed to it.
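// (Editor's recap, illustrative only -- this switch maps status codes
// to actions: SUCCESS stores an ActiveSubscriber and calls
// operationFinished; CLIENT_ALREADY_SUBSCRIBED, SERVICE_DOWN and any
// unknown code call operationFailed with a matching exception;
// NOT_RESPONSIBLE_FOR_TOPIC triggers handleRedirectResponse.)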
pubSubData.getCallback().operationFailed(pubSubData.context, new ClientAlreadySubscribedException("Client is already subscribed for topic: " + pubSubData.topic.toStringUtf8() + ", subscriberId: " + pubSubData.subscriberId.toStringUtf8())); break; case SERVICE_DOWN: // Response was service down failure so just invoke the callback's // operationFailed method. pubSubData.getCallback().operationFailed(pubSubData.context, new ServiceDownException( "Server responded with a SERVICE_DOWN status")); break; case NOT_RESPONSIBLE_FOR_TOPIC: // Redirect response so we'll need to repost the original Subscribe // Request handleRedirectResponse(response, pubSubData, channel); break; default: // Consider all other status codes as errors, operation failed // cases. logger.error("Unexpected error response from server for PubSubResponse: " + response); pubSubData.getCallback().operationFailed(pubSubData.context, new ServiceDownException("Server responded with a status code of: " + response.getStatusCode(), PubSubException.create(response.getStatusCode(), "Original Exception"))); break; } } /** * Handle success response for a specific TopicSubscriber ts. The method * is triggered after subscribing successfully. * * @param ts * Topic Subscriber. * @param pubSubData * Pub/Sub Request data for this subscribe request. * @param channel * Subscription Channel. * @return status code to indicate what happened */ protected abstract Either<StatusCode, HChannel> handleSuccessResponse( TopicSubscriber ts, PubSubData pubSubData, Channel channel); protected void postHandleSuccessResponse(TopicSubscriber ts, ActiveSubscriber ss) { // do nothing now } private StatusCode addSubscription(TopicSubscriber ts, ActiveSubscriber ss) { ActiveSubscriber oldSS = subscriptions.putIfAbsent(ts, ss); if (null != oldSS) { return StatusCode.CLIENT_ALREADY_SUBSCRIBED; } else { return StatusCode.SUCCESS; } } @Override public void handleSubscribeMessage(PubSubResponse response) { Message message = response.getMessage(); TopicSubscriber ts = new TopicSubscriber(response.getTopic(), response.getSubscriberId()); if (logger.isDebugEnabled()) { logger.debug("Handling a Subscribe message in response: {}, {}", va(response, ts)); } ActiveSubscriber ss = getActiveSubscriber(ts); if (null == ss) { logger.error("Subscriber {} not found when receiving its message {}.", va(ts, MessageIdUtils.msgIdToReadableString(message.getMsgId()))); return; } ss.handleMessage(message); } @Override protected void asyncMessageDeliver(TopicSubscriber topicSubscriber, Message message) { ActiveSubscriber ss = getActiveSubscriber(topicSubscriber); if (null == ss) { logger.error("Subscriber {} not found when delivering its message {}.", va(topicSubscriber, MessageIdUtils.msgIdToReadableString(message.getMsgId()))); return; } ss.asyncMessageDeliver(message); } @Override protected void messageConsumed(TopicSubscriber topicSubscriber, Message message) { ActiveSubscriber ss = getActiveSubscriber(topicSubscriber); if (null == ss) { logger.warn("Subscriber {} not found when its message {} was consumed.", va(topicSubscriber, MessageIdUtils.msgIdToReadableString(message.getMsgId()))); return; } if (logger.isDebugEnabled()) { logger.debug("Message has been successfully consumed by the client app : {}, {}", va(message, topicSubscriber)); } ss.messageConsumed(message); } @Override public void handleSubscriptionEvent(ByteString topic, ByteString subscriberId, SubscriptionEvent event) { TopicSubscriber ts = new TopicSubscriber(topic, subscriberId); ActiveSubscriber ss = getActiveSubscriber(ts); if (null == ss) {
logger.warn("No subscription {} found receiving subscription event {}.", va(ts, event)); return; } if (logger.isDebugEnabled()) { logger.debug("Received subscription event {} for ({}).", va(event, ts)); } processSubscriptionEvent(ss, event); } protected void processSubscriptionEvent(ActiveSubscriber as, SubscriptionEvent event) { switch (event) { // for all cases we need to resubscribe for the subscription case TOPIC_MOVED: case SUBSCRIPTION_FORCED_CLOSED: resubscribeIfNecessary(as, event); break; default: logger.error("Receive unknown subscription event {} for {}.", va(event, as.getTopicSubscriber())); } } @Override public void startDelivery(final TopicSubscriber topicSubscriber, MessageHandler messageHandler) throws ClientNotSubscribedException, AlreadyStartDeliveryException { ActiveSubscriber ss = getActiveSubscriber(topicSubscriber); if (null == ss) { throw new ClientNotSubscribedException("Client is not yet subscribed to " + topicSubscriber); } if (logger.isDebugEnabled()) { logger.debug("Start delivering message for {} using message handler {}", va(topicSubscriber, messageHandler)); } ss.startDelivery(messageHandler); } @Override public void stopDelivery(final TopicSubscriber topicSubscriber) throws ClientNotSubscribedException { ActiveSubscriber ss = getActiveSubscriber(topicSubscriber); if (null == ss) { throw new ClientNotSubscribedException("Client is not yet subscribed to " + topicSubscriber); } if (logger.isDebugEnabled()) { logger.debug("Stop delivering messages for {}", topicSubscriber); } ss.stopDelivery(); } @Override public boolean hasSubscription(TopicSubscriber topicSubscriber) { return subscriptions.containsKey(topicSubscriber); } @Override public void consume(final TopicSubscriber topicSubscriber, final MessageSeqId messageSeqId) { ActiveSubscriber ss = getActiveSubscriber(topicSubscriber); if (null == ss) { logger.warn("Subscriber {} is not found consuming message {}.", va(topicSubscriber, MessageIdUtils.msgIdToReadableString(messageSeqId))); return; } ss.consume(messageSeqId); } @Override public void onChannelDisconnected(InetSocketAddress host, Channel channel) { disconnectLock.writeLock().lock(); try { onDisconnect(host); } finally { disconnectLock.writeLock().unlock(); } } private void onDisconnect(InetSocketAddress host) { for (ActiveSubscriber ss : subscriptions.values()) { onDisconnect(ss, host); } } private void onDisconnect(ActiveSubscriber ss, InetSocketAddress host) { logger.info("Subscription channel for ({}) is disconnected.", ss); resubscribeIfNecessary(ss, SubscriptionEvent.TOPIC_MOVED); } protected boolean removeSubscription(TopicSubscriber ts, ActiveSubscriber ss) { return subscriptions.remove(ts, ss); } protected void resubscribeIfNecessary(ActiveSubscriber ss, SubscriptionEvent event) { // if subscriber has been changed, we don't need to resubscribe if (!removeSubscription(ss.getTopicSubscriber(), ss)) { return; } ss.resubscribeIfNecessary(event); } } ActiveSubscriber.java000066400000000000000000000400771244507361200353440ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/netty/impl/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. 
 You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.client.netty.impl; import static org.apache.hedwig.util.VarArgs.va; import java.util.LinkedList; import java.util.Queue; import org.apache.hedwig.client.api.MessageHandler; import org.apache.hedwig.client.conf.ClientConfiguration; import org.apache.hedwig.client.data.MessageConsumeData; import org.apache.hedwig.client.data.PubSubData; import org.apache.hedwig.client.data.TopicSubscriber; import org.apache.hedwig.client.exceptions.AlreadyStartDeliveryException; import org.apache.hedwig.client.netty.FilterableMessageHandler; import org.apache.hedwig.client.netty.HChannel; import org.apache.hedwig.client.netty.NetUtils; import org.apache.hedwig.exceptions.PubSubException.ClientNotSubscribedException; import org.apache.hedwig.filter.ClientMessageFilter; import org.apache.hedwig.protocol.PubSubProtocol.Message; import org.apache.hedwig.protocol.PubSubProtocol.MessageSeqId; import org.apache.hedwig.protocol.PubSubProtocol.PubSubRequest; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionEvent; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionPreferences; import org.apache.hedwig.protoextensions.MessageIdUtils; import org.apache.hedwig.protoextensions.SubscriptionStateUtils; import org.jboss.netty.channel.Channel; import org.jboss.netty.channel.ChannelFuture; import org.jboss.netty.channel.ChannelFutureListener; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * An active subscriber handles subscription actions in a channel. */ public class ActiveSubscriber { private static final Logger logger = LoggerFactory.getLogger(ActiveSubscriber.class); protected final ClientConfiguration cfg; protected final AbstractHChannelManager channelManager; // Subscriber related variables protected final TopicSubscriber topicSubscriber; protected final PubSubData op; protected final SubscriptionPreferences preferences; // the underlying netty channel to send request protected final Channel channel; protected final HChannel hChannel; // Counter for the number of consumed messages so far to buffer up before we // send the Consume message back to the server along with the last/largest // message seq ID seen so far in that batch. private int numConsumedMessagesInBuffer = 0; private MessageSeqId lastMessageSeqId = null; // Message Handler private MessageHandler msgHandler = null; // Queue used for subscribes when the MessageHandler hasn't been registered // yet but we've already received subscription messages from the server. // This will be lazily created as needed. private final Queue<Message> msgQueue = new LinkedList<Message>(); /** * Construct an active subscriber instance. * * @param cfg * Client configuration object. * @param channelManager * Channel manager instance. * @param ts * Topic subscriber. * @param op * Pub/Sub request. * @param preferences * Subscription preferences for the subscriber. * @param channel * Netty channel the subscriber lives on.
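 * @param hChannel
 *            {@link HChannel} wrapper for the channel (editor's note: this
 *            parameter was undocumented in the original javadoc; the
 *            description is assumed from its use in resubscription).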
*/ public ActiveSubscriber(ClientConfiguration cfg, AbstractHChannelManager channelManager, TopicSubscriber ts, PubSubData op, SubscriptionPreferences preferences, Channel channel, HChannel hChannel) { this.cfg = cfg; this.channelManager = channelManager; this.topicSubscriber = ts; this.op = op; this.preferences = preferences; this.channel = channel; this.hChannel = hChannel; } /** * @return pub/sub request for the subscription. */ public PubSubData getPubSubData() { return this.op; } /** * @return topic subscriber id for the active subscriber. */ public TopicSubscriber getTopicSubscriber() { return this.topicSubscriber; } /** * Start delivering messages using the given message handler. * * @param messageHandler * Message handler to deliver messages * @throws AlreadyStartDeliveryException if someone already started delivery. * @throws ClientNotSubscribedException when starting delivery before subscribing. */ public synchronized void startDelivery(MessageHandler messageHandler) throws AlreadyStartDeliveryException, ClientNotSubscribedException { if (null != this.msgHandler) { throw new AlreadyStartDeliveryException("A message handler " + msgHandler + " has been started for " + topicSubscriber); } if (null != messageHandler && messageHandler instanceof FilterableMessageHandler) { FilterableMessageHandler filterMsgHandler = (FilterableMessageHandler) messageHandler; if (filterMsgHandler.hasMessageFilter()) { if (null == preferences) { // no preferences means talking to an old version hub server logger.warn("Start delivering messages with filter but no subscription " + "preferences found. It might be due to talking to an old version" + " hub server."); // use the original message handler. messageHandler = filterMsgHandler.getMessageHandler(); } else { // pass subscription preferences to message filter if (logger.isDebugEnabled()) { logger.debug("Start delivering messages with filter on {}, preferences: {}", va(topicSubscriber, SubscriptionStateUtils.toString(preferences))); } ClientMessageFilter msgFilter = filterMsgHandler.getMessageFilter(); msgFilter.setSubscriptionPreferences(topicSubscriber.getTopic(), topicSubscriber.getSubscriberId(), preferences); } } } this.msgHandler = messageHandler; // Once the MessageHandler is registered, see if we have any queued up // subscription messages sent to us already from the server. If so, // consume those first. Do this only if the MessageHandler registered is // not null (since that would be the HedwigSubscriber.stopDelivery // call). if (null == msgHandler) { return; } if (msgQueue.size() > 0) { if (logger.isDebugEnabled()) { logger.debug("Consuming {} queued up messages for {}", va(msgQueue.size(), topicSubscriber)); } for (Message message : msgQueue) { asyncMessageDeliver(message); } // Now we can remove the queued up messages since they are all // consumed. msgQueue.clear(); } } /** * Stop delivering messages to the subscriber. */ public synchronized void stopDelivery() { this.msgHandler = null; } /** * Handle received message. * * @param message * Received message. */ public synchronized void handleMessage(Message message) { if (null != msgHandler) { asyncMessageDeliver(message); } else { // MessageHandler has not yet been registered so queue up these // messages for the Topic Subscription. Make the initial lazy // creation of the message queue thread safe just so we don't // run into a race condition where two simultaneous threads process // a received message and both try to create a new instance of // the message queue.
 Performance overhead should be okay // because the delivery of the topic has not even started yet // so these messages are not consumed and just buffered up here. if (logger.isDebugEnabled()) { logger.debug("Message {} has arrived but no MessageHandler provided for {}" + " yet so queueing up the message.", va(MessageIdUtils.msgIdToReadableString(message.getMsgId()), topicSubscriber)); } msgQueue.add(message); } } /** * Deliver message to the client. * * @param message * Message to deliver. */ public synchronized void asyncMessageDeliver(Message message) { if (null == msgHandler) { logger.error("No message handler found to deliver message {} to {}.", va(MessageIdUtils.msgIdToReadableString(message.getMsgId()), topicSubscriber)); return; } if (logger.isDebugEnabled()) { logger.debug("Call the client app's MessageHandler asynchronously to deliver the message {} to {}", va(message, topicSubscriber)); } unsafeDeliverMessage(message); } /** * Unsafe version of delivering a message to a message handler. * Callers need to handle synchronization themselves. * * @param message * Message to deliver. */ protected void unsafeDeliverMessage(Message message) { MessageConsumeData messageConsumeData = new MessageConsumeData(topicSubscriber, message); msgHandler.deliver(topicSubscriber.getTopic(), topicSubscriber.getSubscriberId(), message, channelManager.getConsumeCallback(), messageConsumeData); } private synchronized boolean updateLastMessageSeqId(MessageSeqId seqId) { if (null != lastMessageSeqId && seqId.getLocalComponent() <= lastMessageSeqId.getLocalComponent()) { return false; } ++numConsumedMessagesInBuffer; lastMessageSeqId = seqId; if (numConsumedMessagesInBuffer >= cfg.getConsumedMessagesBufferSize()) { numConsumedMessagesInBuffer = 0; lastMessageSeqId = null; return true; } return false; } /** * Consume a specific message. * * @param messageSeqId * Message seq id. */ public void consume(final MessageSeqId messageSeqId) { PubSubRequest.Builder pubsubRequestBuilder = NetUtils.buildConsumeRequest(channelManager.nextTxnId(), topicSubscriber, messageSeqId); // For Consume requests, we will send them from the client in a fire and // forget manner. We are not expecting the server to send back an ack // response so no need to register this in the ResponseHandler. There // are no callbacks to invoke since this isn't a client initiated // action. Instead, just have a future listener that will log an error // message if there was a problem writing the consume request. if (logger.isDebugEnabled()) { logger.debug("Writing a Consume request to channel: {} with messageSeqId: {} for {}", va(channel, messageSeqId, topicSubscriber)); } ChannelFuture future = channel.write(pubsubRequestBuilder.build()); future.addListener(new ChannelFutureListener() { @Override public void operationComplete(ChannelFuture future) throws Exception { if (!future.isSuccess()) { logger.error("Error writing a Consume request to channel: {} with messageSeqId: {} for {}", va(channel, messageSeqId, topicSubscriber)); } } }); } /** * Application acknowledged that it consumed a message. * * @param message * Message consumed by application. */ public void messageConsumed(Message message) { // For consume response to server, there is a config param on how many // messages to consume and buffer up before sending the consume request. // We just need to keep a count of the number of messages consumed // and the largest/latest msg ID seen so far in this batch. Messages // should be delivered in order and without gaps.
 Do this only if // auto-sending of consume messages is enabled. if (cfg.isAutoSendConsumeMessageEnabled()) { // Update these variables only if we are auto-sending consume // messages to the server. Otherwise the onus is on the client app // to call the Subscriber consume API to let the server know which // messages it has successfully consumed. if (updateLastMessageSeqId(message.getMsgId())) { // Send the consume request and reset the consumed messages buffer // variables. We will use the same Channel created from the // subscribe request for the TopicSubscriber. if (logger.isDebugEnabled()) { logger.debug("Consume message {} when reaching consumed message buffer limit.", message.getMsgId()); } consume(message.getMsgId()); } } } /** * Resubscribe a subscriber if necessary. * * @param event * Subscription Event. */ public void resubscribeIfNecessary(SubscriptionEvent event) { // clear topic ownership if (SubscriptionEvent.TOPIC_MOVED == event) { channelManager.clearHostForTopic(topicSubscriber.getTopic(), NetUtils.getHostFromChannel(channel)); } if (!op.options.getEnableResubscribe()) { channelManager.getSubscriptionEventEmitter().emitSubscriptionEvent( topicSubscriber.getTopic(), topicSubscriber.getSubscriberId(), event); return; } // Since the connection to the server host that was responsible // for the topic died, we are not sure about the state of that // server. Resend the original subscribe request data to the default // server host/VIP. Also clear out all of the servers we've // contacted or attempted to from this request as we are starting a // "fresh" subscribe request. op.clearServersList(); // Set a new type of VoidCallback for this async call. We need this // hook so after the resubscribe has completed, delivery for // that topic subscriber should also be restarted (if that was the // case before the channel disconnected). final long retryWaitTime = cfg.getSubscribeReconnectRetryWaitTime(); ResubscribeCallback resubscribeCb = new ResubscribeCallback(topicSubscriber, op, channelManager, retryWaitTime); op.setCallback(resubscribeCb); op.shouldClaim = false; op.context = null; op.setOriginalChannelForResubscribe(hChannel); if (logger.isDebugEnabled()) { logger.debug("Resubscribe {} with origSubData {}", va(topicSubscriber, op)); } // resubmit the request channelManager.submitOp(op); } } ClientChannelPipelineFactory.java000066400000000000000000000061361244507361200376300ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/netty/impl/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License.
*/ package org.apache.hedwig.client.netty.impl; import java.util.Map; import org.jboss.netty.channel.ChannelPipeline; import org.jboss.netty.channel.ChannelPipelineFactory; import org.jboss.netty.channel.Channels; import org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder; import org.jboss.netty.handler.codec.frame.LengthFieldPrepender; import org.jboss.netty.handler.codec.protobuf.ProtobufDecoder; import org.jboss.netty.handler.codec.protobuf.ProtobufEncoder; import org.jboss.netty.handler.ssl.SslHandler; import org.apache.hedwig.client.conf.ClientConfiguration; import org.apache.hedwig.client.handlers.AbstractResponseHandler; import org.apache.hedwig.protocol.PubSubProtocol; import org.apache.hedwig.protocol.PubSubProtocol.OperationType; public abstract class ClientChannelPipelineFactory implements ChannelPipelineFactory { protected ClientConfiguration cfg; protected AbstractHChannelManager channelManager; public ClientChannelPipelineFactory(ClientConfiguration cfg, AbstractHChannelManager channelManager) { this.cfg = cfg; this.channelManager = channelManager; } protected abstract Map<OperationType, AbstractResponseHandler> createResponseHandlers(); private HChannelHandler createHChannelHandler() { return new HChannelHandler(cfg, channelManager, createResponseHandlers()); } // Retrieve a ChannelPipeline from the factory. public ChannelPipeline getPipeline() throws Exception { // Create a new ChannelPipeline using the factory method from the // Channels helper class. ChannelPipeline pipeline = Channels.pipeline(); if (channelManager.getSslFactory() != null) { pipeline.addLast("ssl", new SslHandler(channelManager.getSslFactory().getEngine())); } pipeline.addLast("lengthbaseddecoder", new LengthFieldBasedFrameDecoder( cfg.getMaximumMessageSize(), 0, 4, 0, 4)); pipeline.addLast("lengthprepender", new LengthFieldPrepender(4)); pipeline.addLast("protobufdecoder", new ProtobufDecoder(PubSubProtocol.PubSubResponse.getDefaultInstance())); pipeline.addLast("protobufencoder", new ProtobufEncoder()); pipeline.addLast("responsehandler", createHChannelHandler()); return pipeline; } } DefaultServerChannel.java000066400000000000000000000073041244507361200361450ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/netty/impl/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.client.netty.impl; import java.net.InetSocketAddress; import org.jboss.netty.channel.ChannelFuture; import org.jboss.netty.channel.ChannelFutureListener; import org.apache.hedwig.client.data.PubSubData; import org.apache.hedwig.protocol.PubSubProtocol.OperationType; import static org.apache.hedwig.util.VarArgs.va; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Handle requests sent to default hub server.
DefaultServerChannel would not * be used as a channel to send requests directly. It just takes the responsibility to * connect to the default server. After the underlying netty channel is established, * it would call {@link HChannelManager#submitOpThruChannel()} to send requests thru * the underlying netty channel. */ class DefaultServerChannel extends HChannelImpl { private static Logger logger = LoggerFactory.getLogger(DefaultServerChannel.class); DefaultServerChannel(InetSocketAddress host, AbstractHChannelManager channelManager) { super(host, channelManager); } @Override public String toString() { StringBuilder sb = new StringBuilder(); sb.append("[DefaultServer: ").append(host).append("]"); return sb.toString(); } @Override public void submitOp(final PubSubData pubSubData) { // for each pub/sub request sent to default hub server // we would establish a fresh connection for it ClientChannelPipelineFactory pipelineFactory; if (OperationType.PUBLISH.equals(pubSubData.operationType) || OperationType.UNSUBSCRIBE.equals(pubSubData.operationType)) { pipelineFactory = channelManager.getNonSubscriptionChannelPipelineFactory(); } else { pipelineFactory = channelManager.getSubscriptionChannelPipelineFactory(); } ChannelFuture future = connect(host, pipelineFactory); future.addListener(new ChannelFutureListener() { @Override public void operationComplete(ChannelFuture future) throws Exception { // If the channel has been closed, there is no need to proceed with any callback // logic here. if (closed) { future.getChannel().close(); return; } // Check if the connection to the server was done successfully. if (!future.isSuccess()) { logger.error("Error connecting to host {}.", host); future.getChannel().close(); retryOrFailOp(pubSubData); // Finished with failure logic so just return. return; } logger.debug("Connected to host {} for pubSubData: {}", va(host, pubSubData)); channelManager.submitOpThruChannel(pubSubData, future.getChannel()); } }); } } HChannelHandler.java000066400000000000000000000331121244507361200350530ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/netty/impl/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package org.apache.hedwig.client.netty.impl; import java.net.InetSocketAddress; import java.util.Map; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ConcurrentMap; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.jboss.netty.channel.Channel; import org.jboss.netty.channel.ChannelHandlerContext; import org.jboss.netty.channel.ChannelPipelineCoverage; import org.jboss.netty.channel.ChannelStateEvent; import org.jboss.netty.channel.ExceptionEvent; import org.jboss.netty.channel.MessageEvent; import org.jboss.netty.channel.SimpleChannelHandler; import org.jboss.netty.handler.ssl.SslHandler; import org.apache.hedwig.client.conf.ClientConfiguration; import org.apache.hedwig.client.data.PubSubData; import org.apache.hedwig.client.exceptions.NoResponseHandlerException; import org.apache.hedwig.client.netty.NetUtils; import org.apache.hedwig.client.handlers.AbstractResponseHandler; import org.apache.hedwig.client.handlers.SubscribeResponseHandler; import org.apache.hedwig.exceptions.PubSubException.UncertainStateException; import org.apache.hedwig.exceptions.PubSubException.UnexpectedConditionException; import org.apache.hedwig.protocol.PubSubProtocol.OperationType; import org.apache.hedwig.protocol.PubSubProtocol.PubSubResponse; import org.apache.hedwig.protocol.PubSubProtocol.ResponseBody; import org.apache.hedwig.protocol.PubSubProtocol.StatusCode; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionEventResponse; import static org.apache.hedwig.util.VarArgs.va; @ChannelPipelineCoverage("all") public class HChannelHandler extends SimpleChannelHandler { private static Logger logger = LoggerFactory.getLogger(HChannelHandler.class); // Concurrent Map to store for each async PubSub request, the txn ID // and the corresponding PubSub call's data which stores the VoidCallback to // invoke when we receive a PubSub ack response from the server. // This is specific to this instance of the HChannelHandler which is // tied to a specific netty Channel Pipeline. private final ConcurrentMap<Long, PubSubData> txn2PubSubData = new ConcurrentHashMap<Long, PubSubData>(); // Boolean indicating if we closed the channel this HChannelHandler is // attached to explicitly or not. If so, we do not need to do the // channel disconnected logic here. private volatile boolean channelClosedExplicitly = false; private final AbstractHChannelManager channelManager; private final ClientConfiguration cfg; private final Map<OperationType, AbstractResponseHandler> handlers; private final SubscribeResponseHandler subHandler; public HChannelHandler(ClientConfiguration cfg, AbstractHChannelManager channelManager, Map<OperationType, AbstractResponseHandler> handlers) { this.cfg = cfg; this.channelManager = channelManager; this.handlers = handlers; subHandler = (SubscribeResponseHandler) handlers.get(OperationType.SUBSCRIBE); } public SubscribeResponseHandler getSubscribeResponseHandler() { return subHandler; } public void removeTxn(long txnId) { txn2PubSubData.remove(txnId); } public void addTxn(long txnId, PubSubData pubSubData) { txn2PubSubData.put(txnId, pubSubData); } @Override public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception { // If the Message is not a PubSubResponse, just send it upstream and let // something else handle it. if (!(e.getMessage() instanceof PubSubResponse)) { ctx.sendUpstream(e); return; } // Retrieve the PubSubResponse from the Message that was sent by the // server.
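// (Editor's summary, illustrative only: the response is dispatched three
// ways below -- a pushed message goes to subHandler.handleSubscribeMessage,
// a subscription event goes to subHandler.handleSubscriptionEvent, and
// anything else is treated as an ack that is matched to its pending
// PubSubData by txn id.)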
PubSubResponse response = (PubSubResponse) e.getMessage(); logger.debug("Response received from host: {}, response: {}.", va(NetUtils.getHostFromChannel(ctx.getChannel()), response)); // Determine if this PubSubResponse is an ack response for a PubSub // Request or if it is a message being pushed to the client subscriber. if (response.hasMessage()) { // Subscribed messages being pushed to the client so handle/consume // it and return. if (null == subHandler) { logger.error("Received message from a non-subscription channel : {}", response); } else { subHandler.handleSubscribeMessage(response); } return; } // Process Subscription Events if (response.hasResponseBody()) { ResponseBody resp = response.getResponseBody(); // A special subscription event indicates the state of a subscriber if (resp.hasSubscriptionEvent()) { if (null == subHandler) { logger.error("Received subscription event from a non-subscription channel : {}", response); } else { SubscriptionEventResponse eventResp = resp.getSubscriptionEvent(); logger.debug("Received subscription event {} for (topic:{}, subscriber:{}).", va(eventResp.getEvent(), response.getTopic(), response.getSubscriberId())); subHandler.handleSubscriptionEvent(response.getTopic(), response.getSubscriberId(), eventResp.getEvent()); } return; } } // Response is an ack to a prior PubSubRequest so first retrieve the // PubSub data for this txn. PubSubData pubSubData = txn2PubSubData.remove(response.getTxnId()); // Validate that the PubSub data for this txn is stored. If not, just // log an error message and return since we don't know how to handle // this. if (pubSubData == null) { logger.error("PubSub Data was not found for PubSubResponse: {}", response); return; } // Store the topic2Host mapping if this wasn't a server redirect. We'll // assume that if the server was able to have an open Channel connection // to the client, and responded with an ack message other than the // NOT_RESPONSIBLE_FOR_TOPIC one, it is the correct topic master. if (!response.getStatusCode().equals(StatusCode.NOT_RESPONSIBLE_FOR_TOPIC)) { // Retrieve the server host that we've connected to and store the // mapping from the topic to this host. For all other non-redirected // server statuses, we consider that as a successful connection to the // correct topic master. InetSocketAddress host = NetUtils.getHostFromChannel(ctx.getChannel()); channelManager.storeTopic2HostMapping(pubSubData.topic, host); } // Depending on the operation type, call the appropriate handler. logger.debug("Handling a {} response: {}, pubSubData: {}, host: {}.", va(pubSubData.operationType, response, pubSubData, ctx.getChannel())); AbstractResponseHandler respHandler = handlers.get(pubSubData.operationType); if (null == respHandler) { // The above are the only expected PubSubResponse messages received // from the server for the various client side requests made. 
logger.error("Response received from server is for an unhandled operation {}, txnId: {}.", va(pubSubData.operationType, response.getTxnId())); pubSubData.getCallback().operationFailed(pubSubData.context, new UnexpectedConditionException("Can't find response handler for operation " + pubSubData.operationType)); return; } respHandler.handleResponse(response, pubSubData, ctx.getChannel()); } public void checkTimeoutRequests() { long curTime = System.currentTimeMillis(); long timeoutInterval = cfg.getServerAckResponseTimeout(); for (PubSubData pubSubData : txn2PubSubData.values()) { checkTimeoutRequest(pubSubData, curTime, timeoutInterval); } } private void checkTimeoutRequest(PubSubData pubSubData, long curTime, long timeoutInterval) { if (curTime > pubSubData.requestWriteTime + timeoutInterval) { // Current PubSubRequest has timed out so remove it from the // ResponseHandler's map and invoke the VoidCallback's // operationFailed method. logger.error("Current PubSubRequest has timed out for pubSubData: " + pubSubData); txn2PubSubData.remove(pubSubData.txnId); pubSubData.getCallback().operationFailed(pubSubData.context, new UncertainStateException("Server ack response never received so PubSubRequest has timed out!")); } } // Logic to deal with what happens when a Channel to a server host is // disconnected. @Override public void channelDisconnected(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception { // If this channel was closed explicitly by the client code, // we do not need to do any of this logic. This could happen // for redundant Publish channels created or redirected subscribe // channels that are not used anymore or when we shutdown the // client and manually close all of the open channels. // Also don't do any of the disconnect logic if the client has stopped. if (channelClosedExplicitly || channelManager.isClosed()) { return; } // Make sure the host retrieved is not null as there could be some weird // channel disconnect events happening during a client shutdown. // If it is, just return as there shouldn't be anything we need to do. InetSocketAddress host = NetUtils.getHostFromChannel(ctx.getChannel()); if (host == null) { return; } logger.info("Channel {} was disconnected to host {}.", va(ctx.getChannel(), host)); // If this Channel was used for Publish and Unsubscribe flows, just // remove it from the HewdigPublisher's host2Channel map. We will // re-establish a Channel connection to that server when the next // publish/unsubscribe request to a topic that the server owns occurs. // Now determine what type of operation this channel was used for. if (null == subHandler) { channelManager.onNonSubscriptionChannelDisconnected(host, ctx.getChannel()); } else { channelManager.onSubscriptionChannelDisconnected(host, ctx.getChannel()); } // Finally, all of the PubSubRequests that are still waiting for an ack // response from the server need to be removed and timed out. Invoke the // operationFailed callbacks on all of them. Use the // UncertainStateException since the server did receive the request but // we're not sure of the state of the request since the ack response was // never received. 
for (PubSubData pubSubData : txn2PubSubData.values()) { logger.debug("Channel disconnected so invoking the operationFailed callback for pubSubData: {}", pubSubData); pubSubData.getCallback().operationFailed(pubSubData.context, new UncertainStateException( "Server ack response never received before server connection disconnected!")); } txn2PubSubData.clear(); } // Logic to deal with what happens when a Channel to a server host is // connected. This is needed if the client is using an SSL port to // communicate with the server. If so, we need to do the SSL handshake here // when the channel is first connected. @Override public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception { // No need to initiate the SSL handshake if we are closing this channel // explicitly or the client has been stopped. if (cfg.isSSLEnabled() && !channelClosedExplicitly && !channelManager.isClosed()) { logger.debug("Initiating the SSL handshake"); ctx.getPipeline().get(SslHandler.class).handshake(e.getChannel()); } } @Override public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) { logger.error("Exception caught on client channel", e.getCause()); e.getChannel().close(); } public void closeExplicitly() { // TODO: BOOKKEEPER-350 : Handle consume buffering, etc here - in a different patch channelClosedExplicitly = true; } } HChannelImpl.java000066400000000000000000000334551244507361200344110ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/netty/impl/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.client.netty.impl; import java.net.InetSocketAddress; import java.util.ArrayDeque; import java.util.LinkedList; import java.util.Queue; import com.google.protobuf.ByteString; import org.jboss.netty.bootstrap.ClientBootstrap; import org.jboss.netty.channel.Channel; import org.jboss.netty.channel.ChannelFuture; import org.jboss.netty.channel.ChannelFutureListener; import org.apache.hedwig.client.data.PubSubData; import org.apache.hedwig.client.exceptions.NoResponseHandlerException; import org.apache.hedwig.client.netty.HChannel; import org.apache.hedwig.client.netty.NetUtils; import org.apache.hedwig.exceptions.PubSubException.CouldNotConnectException; import org.apache.hedwig.protocol.PubSubProtocol.PubSubRequest; import org.apache.hedwig.util.HedwigSocketAddress; import static org.apache.hedwig.util.VarArgs.va; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Provide a wrapper over netty channel for Hedwig operations. 
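 * <p>Editor's note (illustrative, based on the implementation below): an
 * HChannelImpl moves through the states DISCONNECTED, CONNECTING and
 * CONNECTED. Operations submitted before the connection completes are
 * queued in {@code pendingOps} and replayed once the connect attempt
 * succeeds, or retried against the default server (or failed) if it does not.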
*/ public class HChannelImpl implements HChannel { private static Logger logger = LoggerFactory.getLogger(HChannelImpl.class); enum State { DISCONNECTED, CONNECTING, CONNECTED, }; InetSocketAddress host; final AbstractHChannelManager channelManager; final ClientChannelPipelineFactory pipelineFactory; volatile Channel channel; volatile State state; // Indicates whether the channel is closed or not. volatile boolean closed = false; // Queue the pubsub requests when the channel is not connected. Queue<PubSubData> pendingOps = new ArrayDeque<PubSubData>(); /** * Create an unestablished channel with the provided target host. * * @param host * Target host address. * @param channelManager * Channel manager that manages the channels. */ protected HChannelImpl(InetSocketAddress host, AbstractHChannelManager channelManager) { this(host, channelManager, null); } public HChannelImpl(InetSocketAddress host, AbstractHChannelManager channelManager, ClientChannelPipelineFactory pipelineFactory) { this(host, null, channelManager, pipelineFactory); state = State.DISCONNECTED; } /** * Create an HChannel with an established netty channel. * * @param host * Target host address. * @param channel * Established Netty channel. * @param channelManager * Channel manager that manages the channels. */ public HChannelImpl(InetSocketAddress host, Channel channel, AbstractHChannelManager channelManager, ClientChannelPipelineFactory pipelineFactory) { this.host = host; this.channel = channel; this.channelManager = channelManager; this.pipelineFactory = pipelineFactory; state = State.CONNECTED; } @Override public void submitOp(PubSubData pubSubData) { boolean doOpNow = false; // common case without lock first if (null != channel && State.CONNECTED == state) { doOpNow = true; } else { synchronized (this) { // check channel & state again under lock if (null != channel && State.CONNECTED == state) { doOpNow = true; } else { // if reached here, channel is either null (first connection attempt), // or the channel is disconnected. Connection attempt is still in progress, // queue up this op. Op will be executed when connection attempt either // fails or succeeds pendingOps.add(pubSubData); } } if (!doOpNow) { // start connection attempt to server connect(); } } if (doOpNow) { executeOpAfterConnected(pubSubData); } } /** * Execute pub/sub operation after the underlying channel is connected. * * @param pubSubData * Pub/Sub Operation */ private void executeOpAfterConnected(PubSubData pubSubData) { PubSubRequest.Builder reqBuilder = NetUtils.buildPubSubRequest(channelManager.nextTxnId(), pubSubData); writePubSubRequest(pubSubData, reqBuilder.build()); } @Override public Channel getChannel() { return channel; } private void writePubSubRequest(PubSubData pubSubData, PubSubRequest pubSubRequest) { if (closed || null == channel || State.CONNECTED != state) { retryOrFailOp(pubSubData); return; } // Before we do the write, store this information into the // ResponseHandler so when the server responds, we know what // appropriate Callback Data to invoke for the given txn ID. try { getHChannelHandlerFromChannel(channel) .addTxn(pubSubData.txnId, pubSubData); } catch (NoResponseHandlerException nrhe) { logger.warn("No Channel Handler found for channel {} when writing request." + " It might already be disconnected.", channel); return; } // Finally, write the pub/sub request through the Channel.
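// (Editor's note, illustrative only: the txn was registered with the
// HChannelHandler via addTxn above *before* this write, so even an
// immediate server ack will find its pending PubSubData.)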
logger.debug("Writing a {} request to host: {} for pubSubData: {}.", va(pubSubData.operationType, host, pubSubData)); ChannelFuture future = channel.write(pubSubRequest); future.addListener(new WriteCallback(pubSubData, channelManager)); } /** * Re-submit operation to default server or fail it. * * @param pubSubData * Pub/Sub Operation */ protected void retryOrFailOp(PubSubData pubSubData) { // if we were not able to connect to the host, it could be down ByteString hostString = ByteString.copyFromUtf8(HedwigSocketAddress.sockAddrStr(host)); if (pubSubData.connectFailedServers != null && pubSubData.connectFailedServers.contains(hostString)) { // We've already tried to connect to this host before so just // invoke the operationFailed callback. logger.error("Error connecting to host {} more than once so fail the request: {}", va(host, pubSubData)); pubSubData.getCallback().operationFailed(pubSubData.context, new CouldNotConnectException("Could not connect to host: " + host)); } else { logger.error("Retry to connect to default hub server again for pubSubData: {}", pubSubData); // Keep track of this current server that we failed to connect // to but retry the request on the default server host/VIP. if (pubSubData.connectFailedServers == null) { pubSubData.connectFailedServers = new LinkedList(); } pubSubData.connectFailedServers.add(hostString); channelManager.submitOpToDefaultServer(pubSubData); } } private void onChannelConnected(ChannelFuture future) { Queue oldPendingOps; synchronized (this) { // if the channel is closed by client, do nothing if (closed) { future.getChannel().close(); return; } state = State.CONNECTED; channel = future.getChannel(); host = NetUtils.getHostFromChannel(channel); oldPendingOps = pendingOps; pendingOps = new ArrayDeque(); } for (PubSubData op : oldPendingOps) { executeOpAfterConnected(op); } } private void onChannelConnectFailure() { Queue oldPendingOps; synchronized (this) { state = State.DISCONNECTED; channel = null; oldPendingOps = pendingOps; pendingOps = new ArrayDeque(); } for (PubSubData op : oldPendingOps) { retryOrFailOp(op); } } private void connect() { synchronized (this) { if (State.CONNECTING == state || State.CONNECTED == state) { return; } state = State.CONNECTING; } // Start the connection attempt to the input server host. ChannelFuture future = connect(host, pipelineFactory); future.addListener(new ChannelFutureListener() { @Override public void operationComplete(ChannelFuture future) throws Exception { // If the channel has been closed, there is no need to proceed with any // callback logic here. if (closed) { future.getChannel().close(); return; } if (!future.isSuccess()) { logger.error("Error connecting to host {}.", host); future.getChannel().close(); // if we were not able to connect to the host, it could be down. onChannelConnectFailure(); return; } logger.debug("Connected to server {}.", host); // Now that we have connected successfully to the server, execute all queueing // requests. onChannelConnected(future); } }); } /** * This is a helper method to do the connect attempt to the server given the * inputted host/port. This can be used to connect to the default server * host/port which is the VIP. That will pick a server in the cluster at * random to connect to for the initial PubSub attempt (with redirect logic * being done at the server side). Additionally, this could be called after * the client makes an initial PubSub attempt at a server, and is redirected * to the one that is responsible for the topic. 
Once the connect to the * server is done, we will perform the corresponding PubSub write on that * channel. * * @param serverHost * Input server host to connect to of type InetSocketAddress * @param pipelineFactory * PipelineFactory to create response handler to handle responses from * underlying channel. */ protected ChannelFuture connect(InetSocketAddress serverHost, ClientChannelPipelineFactory pipelineFactory) { logger.debug("Connecting to host {} ...", serverHost); // Set up the ClientBootStrap so we can create a new Channel connection // to the server. ClientBootstrap bootstrap = new ClientBootstrap(channelManager.getChannelFactory()); bootstrap.setPipelineFactory(pipelineFactory); bootstrap.setOption("tcpNoDelay", true); bootstrap.setOption("keepAlive", true); // Start the connection attempt to the input server host. return bootstrap.connect(serverHost); } @Override public void close(boolean wait) { synchronized (this) { if (closed) { return; } closed = true; } if (null == channel) { return; } try { getHChannelHandlerFromChannel(channel).closeExplicitly(); } catch (NoResponseHandlerException nrhe) { logger.warn("No channel handler found for channel {} when closing it.", channel); } if (wait) { channel.close().awaitUninterruptibly(); } else { channel.close(); } channel = null; } @Override public String toString() { StringBuilder sb = new StringBuilder(); sb.append("[HChannel: host - ").append(host) .append(", channel - ").append(channel) .append(", pending reqs - ").append(pendingOps.size()) .append(", closed - ").append(closed).append("]"); return sb.toString(); } @Override public void close() { close(false); } /** * Helper static method to get the ResponseHandler instance from a Channel * via the ChannelPipeline it is associated with. The assumption is that the * last ChannelHandler tied to the ChannelPipeline is the ResponseHandler. * * @param channel * Channel we are retrieving the ResponseHandler instance for * @return ResponseHandler Instance tied to the Channel's Pipeline */ public static HChannelHandler getHChannelHandlerFromChannel(Channel channel) throws NoResponseHandlerException { if (null == channel) { throw new NoResponseHandlerException("Received a null value for the channel. Cannot retrieve the response handler"); } HChannelHandler handler = (HChannelHandler) channel.getPipeline().getLast(); if (null == handler) { throw new NoResponseHandlerException("Could not retrieve the response handler from the channel's pipeline."); } return handler; } } NonSubscriptionChannelPipelineFactory.java000066400000000000000000000037511244507361200415510ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/netty/impl/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package org.apache.hedwig.client.netty.impl; import java.util.HashMap; import java.util.Map; import org.apache.hedwig.client.conf.ClientConfiguration; import org.apache.hedwig.client.handlers.AbstractResponseHandler; import org.apache.hedwig.client.handlers.PublishResponseHandler; import org.apache.hedwig.client.handlers.UnsubscribeResponseHandler; import org.apache.hedwig.protocol.PubSubProtocol.OperationType; public class NonSubscriptionChannelPipelineFactory extends ClientChannelPipelineFactory { public NonSubscriptionChannelPipelineFactory(ClientConfiguration cfg, AbstractHChannelManager channelManager) { super(cfg, channelManager); } @Override protected Map<OperationType, AbstractResponseHandler> createResponseHandlers() { Map<OperationType, AbstractResponseHandler> handlers = new HashMap<OperationType, AbstractResponseHandler>(); handlers.put(OperationType.PUBLISH, new PublishResponseHandler(cfg, channelManager)); handlers.put(OperationType.UNSUBSCRIBE, new UnsubscribeResponseHandler(cfg, channelManager)); return handlers; } } ResubscribeCallback.java000066400000000000000000000114331244507361200357640ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/netty/impl/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.client.netty.impl; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.apache.hedwig.client.data.PubSubData; import org.apache.hedwig.client.data.TopicSubscriber; import org.apache.hedwig.client.exceptions.AlreadyStartDeliveryException; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.exceptions.PubSubException.ClientNotSubscribedException; import org.apache.hedwig.exceptions.PubSubException.ResubscribeException; import org.apache.hedwig.protocol.PubSubProtocol.ResponseBody; import org.apache.hedwig.util.Callback; import static org.apache.hedwig.util.VarArgs.va; /** * This class is used when a Subscribe channel gets disconnected and we attempt * to resubmit the subscribe requests that existed on that channel. Once the * resubscription to the topic completes, we need to restart delivery for that topic.
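* <p>Sketch of the overall flow (illustrative only; the methods named are the
* ones this callback actually invokes on the channel manager):
* <pre>
* subscribe channel disconnects
*   -> manager resubmits the original subscribe op with this callback attached
*   -> operationFinished: channelManager.restartDelivery(origTopicSubscriber)
*   -> operationFailed:   channelManager.submitOpAfterDelay(origSubData, retryWaitTime)
* </pre>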
*/ class ResubscribeCallback implements Callback<ResponseBody> { private static Logger logger = LoggerFactory.getLogger(ResubscribeCallback.class); // Private member variables private final TopicSubscriber origTopicSubscriber; private final PubSubData origSubData; private final AbstractHChannelManager channelManager; private final long retryWaitTime; // Constructor ResubscribeCallback(TopicSubscriber origTopicSubscriber, PubSubData origSubData, AbstractHChannelManager channelManager, long retryWaitTime) { this.origTopicSubscriber = origTopicSubscriber; this.origSubData = origSubData; this.channelManager = channelManager; this.retryWaitTime = retryWaitTime; } @Override public void operationFinished(Object ctx, ResponseBody resultOfOperation) { if (logger.isDebugEnabled()) logger.debug("Resubscribe succeeded for origSubData: " + origSubData); // Now we want to restart delivery for the subscription channel only // if delivery was started at the time the original subscribe channel // was disconnected. try { channelManager.restartDelivery(origTopicSubscriber); } catch (ClientNotSubscribedException e) { // This exception should never be thrown here but just in case, // log an error and just keep retrying the subscribe request. logger.error("Subscribe was successful but error starting delivery for {} : {}", va(origTopicSubscriber, e.getMessage())); retrySubscribeRequest(); } catch (AlreadyStartDeliveryException asde) { // should not reach here } } @Override public void operationFailed(Object ctx, PubSubException exception) { if (exception instanceof ResubscribeException) { // It might be caused by a closesub racing with the resubscribe, // so we don't need to retry the resubscribe again. logger.warn("Failed to resubscribe {}, but it was caused by a closesub while resubscribing, " + "so we don't need to retry the subscribe again.", origSubData); return; } // If the resubscribe fails, just keep retrying the subscribe // request. There isn't a way to flag to the application layer that // a topic subscription has failed. So instead, we'll just keep // retrying in the background until success. logger.error("Resubscribe failed with error: " + exception.getMessage()); // We don't retry the subscribe request if the channel manager is closing; // otherwise it might overflow the stack. if (!channelManager.isClosed()) { retrySubscribeRequest(); } } private void retrySubscribeRequest() { if (channelManager.isClosed()) { return; } origSubData.clearServersList(); logger.debug("Resubmitting subscribe request for {} in {} ms.", va(origTopicSubscriber, retryWaitTime)); channelManager.submitOpAfterDelay(origSubData, retryWaitTime); } } WriteCallback.java000066400000000000000000000125671244507361200346170ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/netty/impl/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.client.netty.impl; import java.net.InetSocketAddress; import java.util.LinkedList; import org.apache.hedwig.client.exceptions.NoResponseHandlerException; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.jboss.netty.channel.ChannelFuture; import org.jboss.netty.channel.ChannelFutureListener; import com.google.protobuf.ByteString; import org.apache.hedwig.client.data.PubSubData; import org.apache.hedwig.client.netty.HChannelManager; import org.apache.hedwig.client.netty.NetUtils; import org.apache.hedwig.exceptions.PubSubException.ServiceDownException; import org.apache.hedwig.util.HedwigSocketAddress; public class WriteCallback implements ChannelFutureListener { private static Logger logger = LoggerFactory.getLogger(WriteCallback.class); // Private member variables private PubSubData pubSubData; private final HChannelManager channelManager; // Constructor public WriteCallback(PubSubData pubSubData, HChannelManager channelManager) { super(); this.pubSubData = pubSubData; this.channelManager = channelManager; } public void operationComplete(ChannelFuture future) throws Exception { // If the client has stopped, there is no need to proceed // with any callback logic here. if (channelManager.isClosed()) { future.getChannel().close(); return; } // When the write operation to the server is done, we just need to check // if it was successful or not. InetSocketAddress host = NetUtils.getHostFromChannel(future.getChannel()); if (!future.isSuccess()) { logger.error("Error writing on channel to host: {}", host); // On a write failure for a PubSubRequest, we also want to remove // the saved txnId to PubSubData in the ResponseHandler. These // requests will not receive an ack response from the server // so there is no point storing that information there anymore. try { HChannelHandler channelHandler = HChannelImpl.getHChannelHandlerFromChannel(future.getChannel()); channelHandler.removeTxn(pubSubData.txnId); channelHandler.closeExplicitly(); } catch (NoResponseHandlerException e) { // We just couldn't remove the transaction ID's mapping. // The handler was null, so this has been reset anyway. logger.warn("Could not find response handler to remove txnId mapping to pubsub data. Ignoring."); } future.getChannel().close(); // If we were not able to write on the channel to the server host, // the host could have died or something is wrong with the channel // connection where we can connect to the host, but not write to it. ByteString hostString = (host == null) ? null : ByteString.copyFromUtf8(HedwigSocketAddress.sockAddrStr(host)); if (pubSubData.writeFailedServers != null && pubSubData.writeFailedServers.contains(hostString)) { // We've already tried to write to this server previously and // failed, so invoke the operationFailed callback. logger.error("Error writing to host more than once so just invoke the operationFailed callback!"); pubSubData.getCallback().operationFailed(pubSubData.context, new ServiceDownException( "Error while writing message to server: " + hostString)); } else { logger.debug("Try to send the PubSubRequest again to the default server host/VIP for pubSubData: {}", pubSubData); // Keep track of this current server that we failed to write to // but retry the request on the default server host/VIP. 
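// (In other words, writeFailedServers acts as a per-request blacklist:
// the first failed write to a host is rerouted through the default server
// host/VIP, while a second failure against the same host fails the operation
// with ServiceDownException, as in the branch above.)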
if (pubSubData.writeFailedServers == null) pubSubData.writeFailedServers = new LinkedList(); pubSubData.writeFailedServers.add(hostString); channelManager.submitOpToDefaultServer(pubSubData); } } else { // Now that the write to the server is done, we have to wait for it // to respond. The ResponseHandler will take care of the ack // response from the server before we can determine if the async // PubSub call has really completed successfully or not. logger.debug("Successfully wrote to host: {} for pubSubData: {}", host, pubSubData); } } } bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/netty/impl/multiplex/000077500000000000000000000000001244507361200333345ustar00rootroot00000000000000MultiplexHChannelManager.java000066400000000000000000000337151244507361200410100ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/netty/impl/multiplex/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.client.netty.impl.multiplex; import java.net.InetSocketAddress; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ConcurrentMap; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.jboss.netty.channel.Channel; import org.jboss.netty.channel.ChannelFactory; import org.jboss.netty.channel.ChannelFuture; import org.jboss.netty.channel.ChannelFutureListener; import org.apache.hedwig.client.api.MessageHandler; import org.apache.hedwig.client.conf.ClientConfiguration; import org.apache.hedwig.client.data.PubSubData; import org.apache.hedwig.client.data.TopicSubscriber; import org.apache.hedwig.client.exceptions.AlreadyStartDeliveryException; import org.apache.hedwig.client.exceptions.NoResponseHandlerException; import org.apache.hedwig.client.handlers.SubscribeResponseHandler; import org.apache.hedwig.client.netty.CleanupChannelMap; import org.apache.hedwig.client.netty.HChannel; import org.apache.hedwig.client.netty.NetUtils; import org.apache.hedwig.client.netty.impl.AbstractHChannelManager; import org.apache.hedwig.client.netty.impl.ClientChannelPipelineFactory; import org.apache.hedwig.client.netty.impl.HChannelHandler; import org.apache.hedwig.client.netty.impl.HChannelImpl; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.exceptions.PubSubException.ClientNotSubscribedException; import org.apache.hedwig.exceptions.PubSubException.ServiceDownException; import org.apache.hedwig.protocol.PubSubProtocol.OperationType; import org.apache.hedwig.protocol.PubSubProtocol.ResponseBody; import org.apache.hedwig.util.Callback; import org.apache.hedwig.util.Either; import static org.apache.hedwig.util.VarArgs.va; /** * Multiplex HChannel Manager which establish a connection for multi subscriptions. 
*/ public class MultiplexHChannelManager extends AbstractHChannelManager { static final Logger logger = LoggerFactory.getLogger(MultiplexHChannelManager.class); // Find which HChannel a given TopicSubscriber uses. protected final CleanupChannelMap<InetSocketAddress> subscriptionChannels; // An index map recording which subscription channel serves each topic subscriber protected final CleanupChannelMap<TopicSubscriber> sub2Channels; // Concurrent Map to store Message handler for each topic + sub id combination. // Store it here instead of in SubscriberResponseHandler as we don't want to lose the handler // the user set when the connection is recovered protected final ConcurrentMap<TopicSubscriber, MessageHandler> topicSubscriber2MessageHandler = new ConcurrentHashMap<TopicSubscriber, MessageHandler>(); // PipelineFactory to create subscription netty channels to the appropriate server private final ClientChannelPipelineFactory subscriptionChannelPipelineFactory; public MultiplexHChannelManager(ClientConfiguration cfg, ChannelFactory socketFactory) { super(cfg, socketFactory); subscriptionChannels = new CleanupChannelMap<InetSocketAddress>(); sub2Channels = new CleanupChannelMap<TopicSubscriber>(); subscriptionChannelPipelineFactory = new MultiplexSubscriptionChannelPipelineFactory(cfg, this); } @Override protected ClientChannelPipelineFactory getSubscriptionChannelPipelineFactory() { return subscriptionChannelPipelineFactory; } @Override protected HChannel createAndStoreSubscriptionChannel(Channel channel) { // store the channel connected to target host for future usage InetSocketAddress host = NetUtils.getHostFromChannel(channel); HChannel newHChannel = new HChannelImpl(host, channel, this, getSubscriptionChannelPipelineFactory()); return storeSubscriptionChannel(host, newHChannel); } @Override protected HChannel createAndStoreSubscriptionChannel(InetSocketAddress host) { HChannel newHChannel = new HChannelImpl(host, this, getSubscriptionChannelPipelineFactory()); return storeSubscriptionChannel(host, newHChannel); } private HChannel storeSubscriptionChannel(InetSocketAddress host, HChannel newHChannel) { // here, we guarantee there is only one channel used to communicate with target // host.
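// Note: addChannel is assumed here to resolve concurrent adds by keeping
// whichever HChannel was stored first for the host (see CleanupChannelMap);
// callers should therefore use the returned instance, which may differ from
// the newHChannel they passed in.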
return subscriptionChannels.addChannel(host, newHChannel); } @Override protected HChannel getSubscriptionChannel(InetSocketAddress host) { return subscriptionChannels.getChannel(host); } protected HChannel getSubscriptionChannel(TopicSubscriber subscriber) { InetSocketAddress host = topic2Host.get(subscriber.getTopic()); if (null == host) { // we don't know who owns the topic yet return null; } else { return getSubscriptionChannel(host); } } @Override protected HChannel getSubscriptionChannelByTopicSubscriber(TopicSubscriber subscriber) { InetSocketAddress host = topic2Host.get(subscriber.getTopic()); if (null == host) { // we don't know where the topic is hosted return null; } else { // we already know which server owns the topic HChannel channel = getSubscriptionChannel(host); if (null == channel) { // create a channel to connect to the specified host channel = createAndStoreSubscriptionChannel(host); } return channel; } } @Override protected void onSubscriptionChannelDisconnected(InetSocketAddress host, Channel channel) { HChannel hChannel = subscriptionChannels.getChannel(host); if (null == hChannel) { return; } Channel underlyingChannel = hChannel.getChannel(); if (null == underlyingChannel || !underlyingChannel.equals(channel)) { return; } logger.info("Subscription Channel {} disconnected from {}.", va(channel, host)); // remove the existing channel if (subscriptionChannels.removeChannel(host, hChannel)) { try { HChannelHandler channelHandler = HChannelImpl.getHChannelHandlerFromChannel(channel); channelHandler.getSubscribeResponseHandler() .onChannelDisconnected(host, channel); } catch (NoResponseHandlerException nrhe) { logger.warn("No Channel Handler found for channel {} when it disconnected.", channel); } } } @Override public SubscribeResponseHandler getSubscribeResponseHandler(TopicSubscriber topicSubscriber) { HChannel hChannel = getSubscriptionChannel(topicSubscriber); if (null == hChannel) { return null; } Channel channel = hChannel.getChannel(); if (null == channel) { return null; } try { HChannelHandler channelHandler = HChannelImpl.getHChannelHandlerFromChannel(channel); return channelHandler.getSubscribeResponseHandler(); } catch (NoResponseHandlerException nrhe) { logger.warn("No Channel Handler found for channel {}, topic subscriber {}.", channel, topicSubscriber); return null; } } @Override public void startDelivery(TopicSubscriber topicSubscriber, MessageHandler messageHandler) throws ClientNotSubscribedException, AlreadyStartDeliveryException { startDelivery(topicSubscriber, messageHandler, false); } @Override protected void restartDelivery(TopicSubscriber topicSubscriber) throws ClientNotSubscribedException, AlreadyStartDeliveryException { startDelivery(topicSubscriber, null, true); } private void startDelivery(TopicSubscriber topicSubscriber, MessageHandler messageHandler, boolean restart) throws ClientNotSubscribedException, AlreadyStartDeliveryException { // Make sure this topic subscription exists on the client side. // The assumption is that the client should have in memory the // Channel created for the TopicSubscriber once the server has sent // an ack response to the initial subscribe request.
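// For illustration, an application typically reaches this path through the
// public client API (hypothetical wiring; Subscriber is the usual entry
// point and is not defined in this file):
//
//   subscriber.subscribe(topic, subscriberId, CreateOrAttach.CREATE_OR_ATTACH);
//   subscriber.startDelivery(topic, subscriberId, messageHandler);
//
// which lands here with restart == false and the user's MessageHandler.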
SubscribeResponseHandler subscribeResponseHandler = getSubscribeResponseHandler(topicSubscriber); if (null == subscribeResponseHandler || !subscribeResponseHandler.hasSubscription(topicSubscriber)) { logger.error("Client is not yet subscribed to {}.", topicSubscriber); throw new ClientNotSubscribedException("Client is not yet subscribed to " + topicSubscriber); } MessageHandler existedMsgHandler = topicSubscriber2MessageHandler.get(topicSubscriber); if (restart) { // restart using existing msg handler messageHandler = existedMsgHandler; } else { // someone has started delivery but has not stopped it if (null != existedMsgHandler) { throw new AlreadyStartDeliveryException("A message handler has been started for topic subscriber " + topicSubscriber); } if (messageHandler != null) { if (null != topicSubscriber2MessageHandler.putIfAbsent(topicSubscriber, messageHandler)) { throw new AlreadyStartDeliveryException("Someone else is already starting delivery for topic subscriber " + topicSubscriber); } } } // tell subscribe response handler to start delivering messages for topicSubscriber subscribeResponseHandler.startDelivery(topicSubscriber, messageHandler); } public void stopDelivery(TopicSubscriber topicSubscriber) throws ClientNotSubscribedException { // Make sure this topic subscription exists on the client side. // The assumption is that the client should have in memory the // Channel created for the TopicSubscriber once the server has sent // an ack response to the initial subscribe request. SubscribeResponseHandler subscribeResponseHandler = getSubscribeResponseHandler(topicSubscriber); if (null == subscribeResponseHandler || !subscribeResponseHandler.hasSubscription(topicSubscriber)) { logger.error("Client is not yet subscribed to {}.", topicSubscriber); throw new ClientNotSubscribedException("Client is not yet subscribed to " + topicSubscriber); } // tell subscribe response handler to stop delivering messages for a given topic subscriber topicSubscriber2MessageHandler.remove(topicSubscriber); subscribeResponseHandler.stopDelivery(topicSubscriber); } @Override public void asyncCloseSubscription(final TopicSubscriber topicSubscriber, final Callback<ResponseBody> callback, final Object context) { SubscribeResponseHandler subscribeResponseHandler = getSubscribeResponseHandler(topicSubscriber); if (null == subscribeResponseHandler || !subscribeResponseHandler.hasSubscription(topicSubscriber)) { logger.warn("Trying to close a subscription when we don't have a subscription channel cached for {}", topicSubscriber); callback.operationFinished(context, (ResponseBody)null); return; } subscribeResponseHandler.asyncCloseSubscription(topicSubscriber, callback, context); } @Override protected void checkTimeoutRequestsOnSubscriptionChannels() { // timeout task may be started before constructing subscriptionChannels if (null == subscriptionChannels) { return; } for (HChannel channel : subscriptionChannels.getChannels()) { try { HChannelHandler channelHandler = HChannelImpl.getHChannelHandlerFromChannel(channel.getChannel()); channelHandler.checkTimeoutRequests(); } catch (NoResponseHandlerException nrhe) { continue; } } } @Override protected void closeSubscriptionChannels() { subscriptionChannels.close(); } protected Either<Boolean, HChannel> storeSubscriptionChannel( TopicSubscriber topicSubscriber, PubSubData txn, HChannel channel) { boolean replaced = sub2Channels.replaceChannel( topicSubscriber, txn.getOriginalChannelForResubscribe(), channel); if (replaced) { return Either.of(replaced, channel); } else { return Either.of(replaced,
null); } } protected boolean removeSubscriptionChannel( TopicSubscriber topicSubscriber, HChannel channel) { return sub2Channels.removeChannel(topicSubscriber, channel); } } MultiplexSubscribeResponseHandler.java000066400000000000000000000132701244507361200427650ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/netty/impl/multiplex/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.client.netty.impl.multiplex; import java.net.InetSocketAddress; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.jboss.netty.channel.Channel; import org.apache.hedwig.client.conf.ClientConfiguration; import org.apache.hedwig.client.data.PubSubData; import org.apache.hedwig.client.data.TopicSubscriber; import org.apache.hedwig.client.netty.HChannelManager; import org.apache.hedwig.client.netty.HChannel; import org.apache.hedwig.client.netty.NetUtils; import org.apache.hedwig.client.netty.impl.AbstractSubscribeResponseHandler; import org.apache.hedwig.client.netty.impl.ActiveSubscriber; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.exceptions.PubSubException.UnexpectedConditionException; import org.apache.hedwig.protocol.PubSubProtocol.OperationType; import org.apache.hedwig.protocol.PubSubProtocol.PubSubResponse; import org.apache.hedwig.protocol.PubSubProtocol.ResponseBody; import org.apache.hedwig.protocol.PubSubProtocol.StatusCode; import org.apache.hedwig.util.Callback; import org.apache.hedwig.util.Either; import static org.apache.hedwig.util.VarArgs.va; public class MultiplexSubscribeResponseHandler extends AbstractSubscribeResponseHandler { private static Logger logger = LoggerFactory.getLogger(MultiplexSubscribeResponseHandler.class); // the underlying subscription channel volatile HChannel hChannel; private final MultiplexHChannelManager sChannelManager; protected MultiplexSubscribeResponseHandler(ClientConfiguration cfg, HChannelManager channelManager) { super(cfg, channelManager); sChannelManager = (MultiplexHChannelManager) channelManager; } @Override public void handleResponse(PubSubResponse response, PubSubData pubSubData, Channel channel) throws Exception { if (null == hChannel) { InetSocketAddress host = NetUtils.getHostFromChannel(channel); hChannel = sChannelManager.getSubscriptionChannel(host); if (null == hChannel || !channel.equals(hChannel.getChannel())) { PubSubException pse = new UnexpectedConditionException("Failed to get subscription channel of " + host); pubSubData.getCallback().operationFailed(pubSubData.context, pse); return; } } super.handleResponse(response, pubSubData, channel); } @Override protected Either handleSuccessResponse( TopicSubscriber ts, PubSubData pubSubData, Channel channel) { // Store the mapping for the 
TopicSubscriber to the Channel. // This is so we can control the starting and stopping of // message deliveries from the server on that Channel. Store // this only on a successful ack response from the server. Either result = sChannelManager.storeSubscriptionChannel(ts, pubSubData, hChannel); if (result.left()) { return Either.of(StatusCode.SUCCESS, result.right()); } else { StatusCode code; if (pubSubData.isResubscribeRequest()) { code = StatusCode.RESUBSCRIBE_EXCEPTION; } else { code = StatusCode.CLIENT_ALREADY_SUBSCRIBED; } return Either.of(code, null); } } @Override public void asyncCloseSubscription(final TopicSubscriber topicSubscriber, final Callback callback, final Object context) { final ActiveSubscriber ss = getActiveSubscriber(topicSubscriber); if (null == ss || null == hChannel) { logger.debug("No subscription {} found when closing its subscription from {}.", va(topicSubscriber, hChannel)); callback.operationFinished(context, (ResponseBody)null); return; } Callback closeCb = new Callback() { @Override public void operationFinished(Object ctx, ResponseBody respBody) { removeSubscription(topicSubscriber, ss); sChannelManager.removeSubscriptionChannel(topicSubscriber, hChannel); callback.operationFinished(context, null); } @Override public void operationFailed(Object ctx, PubSubException exception) { callback.operationFailed(context, exception); } }; PubSubData closeOp = new PubSubData(topicSubscriber.getTopic(), null, topicSubscriber.getSubscriberId(), OperationType.CLOSESUBSCRIPTION, null, closeCb, context); hChannel.submitOp(closeOp); } } MultiplexSubscriptionChannelPipelineFactory.java000066400000000000000000000042551244507361200450250ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/netty/impl/multiplex/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package org.apache.hedwig.client.netty.impl.multiplex; import java.util.HashMap; import java.util.Map; import org.apache.hedwig.client.conf.ClientConfiguration; import org.apache.hedwig.client.handlers.AbstractResponseHandler; import org.apache.hedwig.client.handlers.CloseSubscriptionResponseHandler; import org.apache.hedwig.client.netty.impl.AbstractHChannelManager; import org.apache.hedwig.client.netty.impl.ClientChannelPipelineFactory; import org.apache.hedwig.client.netty.impl.HChannelHandler; import org.apache.hedwig.protocol.PubSubProtocol.OperationType; public class MultiplexSubscriptionChannelPipelineFactory extends ClientChannelPipelineFactory { public MultiplexSubscriptionChannelPipelineFactory(ClientConfiguration cfg, MultiplexHChannelManager channelManager) { super(cfg, channelManager); } @Override protected Map createResponseHandlers() { Map handlers = new HashMap(); handlers.put(OperationType.SUBSCRIBE, new MultiplexSubscribeResponseHandler(cfg, channelManager)); handlers.put(OperationType.CLOSESUBSCRIPTION, new CloseSubscriptionResponseHandler(cfg, channelManager)); return handlers; } } bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/netty/impl/simple/000077500000000000000000000000001244507361200326025ustar00rootroot00000000000000SimpleHChannelManager.java000066400000000000000000000402401244507361200375130ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/netty/impl/simple/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package org.apache.hedwig.client.netty.impl.simple; import java.net.InetSocketAddress; import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ConcurrentMap; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.jboss.netty.channel.Channel; import org.jboss.netty.channel.ChannelFactory; import org.jboss.netty.channel.ChannelFuture; import org.jboss.netty.channel.ChannelFutureListener; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.client.api.MessageHandler; import org.apache.hedwig.client.conf.ClientConfiguration; import org.apache.hedwig.client.data.TopicSubscriber; import org.apache.hedwig.client.data.PubSubData; import org.apache.hedwig.client.exceptions.AlreadyStartDeliveryException; import org.apache.hedwig.client.exceptions.NoResponseHandlerException; import org.apache.hedwig.client.handlers.SubscribeResponseHandler; import org.apache.hedwig.client.netty.CleanupChannelMap; import org.apache.hedwig.client.netty.HChannel; import org.apache.hedwig.client.netty.NetUtils; import org.apache.hedwig.client.netty.impl.AbstractHChannelManager; import org.apache.hedwig.client.netty.impl.ClientChannelPipelineFactory; import org.apache.hedwig.client.netty.impl.HChannelHandler; import org.apache.hedwig.client.netty.impl.HChannelImpl; import org.apache.hedwig.exceptions.PubSubException.ClientNotSubscribedException; import org.apache.hedwig.exceptions.PubSubException.ServiceDownException; import org.apache.hedwig.exceptions.PubSubException.TopicBusyException; import org.apache.hedwig.protocol.PubSubProtocol.ResponseBody; import org.apache.hedwig.protocol.PubSubProtocol.OperationType; import org.apache.hedwig.util.Callback; import org.apache.hedwig.util.Either; import static org.apache.hedwig.util.VarArgs.va; /** * Simple HChannel Manager which establishes a connection for each subscription. */ public class SimpleHChannelManager extends AbstractHChannelManager { private static Logger logger = LoggerFactory.getLogger(SimpleHChannelManager.class); // Concurrent Map to store the cached Channel connections on the client side // to a server host for a given Topic + SubscriberId combination. For each // TopicSubscriber, we want a unique Channel connection to the server for // it. We can also get the ResponseHandler tied to the Channel via the // Channel Pipeline. protected final CleanupChannelMap<TopicSubscriber> topicSubscriber2Channel; // Concurrent Map to store Message handler for each topic + sub id combination. // Store it here instead of in SubscriberResponseHandler as we don't want to lose the handler // the user set when the connection is recovered protected final ConcurrentMap<TopicSubscriber, MessageHandler> topicSubscriber2MessageHandler = new ConcurrentHashMap<TopicSubscriber, MessageHandler>(); // PipelineFactory to create subscription netty channels to the appropriate server private final ClientChannelPipelineFactory subscriptionChannelPipelineFactory; public SimpleHChannelManager(ClientConfiguration cfg, ChannelFactory socketFactory) { super(cfg, socketFactory); topicSubscriber2Channel = new CleanupChannelMap<TopicSubscriber>(); this.subscriptionChannelPipelineFactory = new SimpleSubscriptionChannelPipelineFactory(cfg, this); } @Override public void submitOp(final PubSubData pubSubData) { /** * In the simple hchannel implementation, if a client closes a subscription * and tries to attach to it immediately, it could get a TOPIC_BUSY response.
This * is because a subscription is closed simply by closing the channel, and the hub * side may not have been notified of the channel disconnection event by the time * the new subscription request comes in. To solve this, retry up to 5 times. * See https://issues.apache.org/jira/browse/BOOKKEEPER-513 */ if (OperationType.SUBSCRIBE.equals(pubSubData.operationType)) { final Callback<ResponseBody> origCb = pubSubData.getCallback(); final AtomicInteger retries = new AtomicInteger(5); final Callback<ResponseBody> wrapperCb = new Callback<ResponseBody>() { @Override public void operationFinished(Object ctx, ResponseBody resultOfOperation) { origCb.operationFinished(ctx, resultOfOperation); } @Override public void operationFailed(Object ctx, PubSubException exception) { if (exception instanceof ServiceDownException && exception.getCause() instanceof TopicBusyException && retries.decrementAndGet() > 0) { logger.warn("TOPIC_BUSY from server using simple channel scheme. " + "This could be due to the channel disconnection from a close" + " not having been triggered on the server side. Retrying"); SimpleHChannelManager.super.submitOp(pubSubData); return; } origCb.operationFailed(ctx, exception); } }; pubSubData.setCallback(wrapperCb); } super.submitOp(pubSubData); } @Override protected ClientChannelPipelineFactory getSubscriptionChannelPipelineFactory() { return subscriptionChannelPipelineFactory; } @Override protected HChannel createAndStoreSubscriptionChannel(Channel channel) { // for simple channel, we don't store the subscription channel now; // we store it once we receive a success response InetSocketAddress host = NetUtils.getHostFromChannel(channel); return new HChannelImpl(host, channel, this, getSubscriptionChannelPipelineFactory()); } @Override protected HChannel createAndStoreSubscriptionChannel(InetSocketAddress host) { // for simple channel, we don't store the subscription channel now; // we store it once we receive a success response return new HChannelImpl(host, this, getSubscriptionChannelPipelineFactory()); } protected Either<Boolean, HChannel> storeSubscriptionChannel( TopicSubscriber topicSubscriber, PubSubData txn, Channel channel) { InetSocketAddress host = NetUtils.getHostFromChannel(channel); HChannel newHChannel = new HChannelImpl(host, channel, this, getSubscriptionChannelPipelineFactory()); boolean replaced = topicSubscriber2Channel.replaceChannel( topicSubscriber, txn.getOriginalChannelForResubscribe(), newHChannel); if (replaced) { return Either.of(replaced, newHChannel); } else { return Either.of(replaced, null); } } @Override protected HChannel getSubscriptionChannel(InetSocketAddress host) { return null; } @Override protected HChannel getSubscriptionChannelByTopicSubscriber(TopicSubscriber subscriber) { HChannel channel = topicSubscriber2Channel.getChannel(subscriber); if (null != channel) { // a channel has already been established for this subscription return channel; } else { InetSocketAddress host = topic2Host.get(subscriber.getTopic()); if (null == host) { return null; } else { channel = getSubscriptionChannel(host); if (null == channel) { channel = createAndStoreSubscriptionChannel(host); } return channel; } } } @Override protected void onSubscriptionChannelDisconnected(InetSocketAddress host, Channel channel) { logger.info("Subscription Channel {} disconnected from {}.", va(channel, host)); try { // get hchannel handler HChannelHandler channelHandler = HChannelImpl.getHChannelHandlerFromChannel(channel); channelHandler.getSubscribeResponseHandler() .onChannelDisconnected(host, channel); } catch (NoResponseHandlerException nrhe) {
logger.warn("No Channel Handler found for channel {} when it disconnected.", channel); } } @Override public SubscribeResponseHandler getSubscribeResponseHandler(TopicSubscriber topicSubscriber) { HChannel hChannel = topicSubscriber2Channel.getChannel(topicSubscriber); if (null == hChannel) { return null; } Channel channel = hChannel.getChannel(); if (null == channel) { return null; } try { HChannelHandler channelHandler = HChannelImpl.getHChannelHandlerFromChannel(channel); return channelHandler.getSubscribeResponseHandler(); } catch (NoResponseHandlerException nrhe) { logger.warn("No Channel Handler found for channel {}, topic subscriber {}.", channel, topicSubscriber); return null; } } @Override public void startDelivery(TopicSubscriber topicSubscriber, MessageHandler messageHandler) throws ClientNotSubscribedException, AlreadyStartDeliveryException { startDelivery(topicSubscriber, messageHandler, false); } @Override protected void restartDelivery(TopicSubscriber topicSubscriber) throws ClientNotSubscribedException, AlreadyStartDeliveryException { startDelivery(topicSubscriber, null, true); } private void startDelivery(TopicSubscriber topicSubscriber, MessageHandler messageHandler, boolean restart) throws ClientNotSubscribedException, AlreadyStartDeliveryException { // Make sure we know about this topic subscription on the client side // exists. The assumption is that the client should have in memory the // Channel created for the TopicSubscriber once the server has sent // an ack response to the initial subscribe request. SubscribeResponseHandler subscribeResponseHandler = getSubscribeResponseHandler(topicSubscriber); if (null == subscribeResponseHandler || !subscribeResponseHandler.hasSubscription(topicSubscriber)) { logger.error("Client is not yet subscribed to {}.", topicSubscriber); throw new ClientNotSubscribedException("Client is not yet subscribed to " + topicSubscriber); } MessageHandler existedMsgHandler = topicSubscriber2MessageHandler.get(topicSubscriber); if (restart) { // restart using existing msg handler messageHandler = existedMsgHandler; } else { // some has started delivery but not stop it if (null != existedMsgHandler) { throw new AlreadyStartDeliveryException("A message handler has been started for topic subscriber " + topicSubscriber); } if (messageHandler != null) { if (null != topicSubscriber2MessageHandler.putIfAbsent(topicSubscriber, messageHandler)) { throw new AlreadyStartDeliveryException("Someone is also starting delivery for topic subscriber " + topicSubscriber); } } } // tell subscribe response handler to start delivering messages for topicSubscriber subscribeResponseHandler.startDelivery(topicSubscriber, messageHandler); } public void stopDelivery(TopicSubscriber topicSubscriber) throws ClientNotSubscribedException { // Make sure we know that this topic subscription on the client side // exists. The assumption is that the client should have in memory the // Channel created for the TopicSubscriber once the server has sent // an ack response to the initial subscribe request. 
SubscribeResponseHandler subscribeResponseHandler = getSubscribeResponseHandler(topicSubscriber); if (null == subscribeResponseHandler || !subscribeResponseHandler.hasSubscription(topicSubscriber)) { logger.error("Client is not yet subscribed to {}.", topicSubscriber); throw new ClientNotSubscribedException("Client is not yet subscribed to " + topicSubscriber); } // tell subscribe response handler to stop delivering messages for a given topic subscriber topicSubscriber2MessageHandler.remove(topicSubscriber); subscribeResponseHandler.stopDelivery(topicSubscriber); } @Override public void asyncCloseSubscription(final TopicSubscriber topicSubscriber, final Callback callback, final Object context) { HChannel hChannel = topicSubscriber2Channel.removeChannel(topicSubscriber); if (null == hChannel) { logger.warn("Trying to close a subscription when we don't have a subscribe channel cached for {}", topicSubscriber); callback.operationFinished(context, (ResponseBody)null); return; } Channel channel = hChannel.getChannel(); if (null == channel) { callback.operationFinished(context, (ResponseBody)null); return; } try { HChannelImpl.getHChannelHandlerFromChannel(channel).closeExplicitly(); } catch (NoResponseHandlerException nrhe) { logger.warn("No Channel Handler found when closing {}'s channel {}.", channel, topicSubscriber); } ChannelFuture future = channel.close(); future.addListener(new ChannelFutureListener() { @Override public void operationComplete(ChannelFuture future) throws Exception { if (!future.isSuccess()) { logger.error("Failed to close the subscription channel for {}", topicSubscriber); callback.operationFailed(context, new ServiceDownException( "Failed to close the subscription channel for " + topicSubscriber)); } else { callback.operationFinished(context, (ResponseBody)null); } } }); } @Override protected void checkTimeoutRequestsOnSubscriptionChannels() { // timeout task may be started before constructing topicSubscriber2Channel if (null == topicSubscriber2Channel) { return; } for (HChannel channel : topicSubscriber2Channel.getChannels()) { try { HChannelHandler channelHandler = HChannelImpl.getHChannelHandlerFromChannel(channel.getChannel()); channelHandler.checkTimeoutRequests(); } catch (NoResponseHandlerException nrhe) { continue; } } } @Override protected void closeSubscriptionChannels() { topicSubscriber2Channel.close(); } } SimpleSubscribeResponseHandler.java000066400000000000000000000303661244507361200415060ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/netty/impl/simple/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package org.apache.hedwig.client.netty.impl.simple; import java.net.InetSocketAddress; import java.util.Set; import java.util.Collections; import java.util.concurrent.ConcurrentHashMap; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.jboss.netty.channel.Channel; import org.jboss.netty.channel.ChannelFuture; import org.jboss.netty.channel.ChannelFutureListener; import com.google.protobuf.ByteString; import org.apache.hedwig.client.api.MessageHandler; import org.apache.hedwig.client.conf.ClientConfiguration; import org.apache.hedwig.client.data.PubSubData; import org.apache.hedwig.client.data.TopicSubscriber; import org.apache.hedwig.client.exceptions.AlreadyStartDeliveryException; import org.apache.hedwig.client.handlers.SubscribeResponseHandler; import org.apache.hedwig.client.netty.HChannel; import org.apache.hedwig.client.netty.HChannelManager; import org.apache.hedwig.client.netty.impl.AbstractHChannelManager; import org.apache.hedwig.client.netty.impl.AbstractSubscribeResponseHandler; import org.apache.hedwig.client.netty.impl.ActiveSubscriber; import org.apache.hedwig.client.netty.impl.HChannelImpl; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.exceptions.PubSubException.ClientNotSubscribedException; import org.apache.hedwig.protocol.PubSubProtocol.Message; import org.apache.hedwig.protocol.PubSubProtocol.PubSubResponse; import org.apache.hedwig.protocol.PubSubProtocol.ResponseBody; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionEvent; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionPreferences; import org.apache.hedwig.protocol.PubSubProtocol.StatusCode; import org.apache.hedwig.protoextensions.MessageIdUtils; import org.apache.hedwig.util.Callback; import org.apache.hedwig.util.Either; public class SimpleSubscribeResponseHandler extends AbstractSubscribeResponseHandler { private static Logger logger = LoggerFactory.getLogger(SimpleSubscribeResponseHandler.class); /** * Simple Active Subscriber enabling client-side throttling. */ static class SimpleActiveSubscriber extends ActiveSubscriber { // Set to store all of the outstanding subscribed messages that are pending // to be consumed by the client app's MessageHandler. If this ever grows too // big (e.g. problem at the client end for message consumption), we can // throttle things by temporarily setting the Subscribe Netty Channel // to not be readable. When the Set has shrunk sufficiently, we can turn the // channel back on to read new messages. private final Set outstandingMsgSet; public SimpleActiveSubscriber(ClientConfiguration cfg, AbstractHChannelManager channelManager, TopicSubscriber ts, PubSubData op, SubscriptionPreferences preferences, Channel channel, HChannel hChannel) { super(cfg, channelManager, ts, op, preferences, channel, hChannel); outstandingMsgSet = Collections.newSetFromMap( new ConcurrentHashMap( cfg.getMaximumOutstandingMessages(), 1.0f)); } @Override protected void unsafeDeliverMessage(Message message) { // Add this "pending to be consumed" message to the outstandingMsgSet. outstandingMsgSet.add(message); // Check if we've exceeded the max size for the outstanding message set. if (outstandingMsgSet.size() >= cfg.getMaximumOutstandingMessages() && channel.isReadable()) { // Too many outstanding messages so throttle it by setting the Netty // Channel to not be readable. 
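// Sketch of the flow-control state machine implemented here and in
// messageConsumed() below (channel readability is the only knob):
//
//   outstandingMsgSet.size() >= getMaximumOutstandingMessages() && readable  -> setReadable(false)
//   outstandingMsgSet.size() == 0                               && !readable -> setReadable(true)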
if (logger.isDebugEnabled()) { logger.debug("Too many outstanding messages ({}) so throttling the subscribe netty Channel", outstandingMsgSet.size()); } channel.setReadable(false); } super.unsafeDeliverMessage(message); } @Override public synchronized void messageConsumed(Message message) { super.messageConsumed(message); // Remove this consumed message from the outstanding Message Set. outstandingMsgSet.remove(message); // Check if we throttled message consumption previously when the // outstanding message limit was reached. For now, only turn the // delivery back on if there are no more outstanding messages to // consume. We could make this a configurable parameter if needed. if (!channel.isReadable() && outstandingMsgSet.size() == 0) { if (logger.isDebugEnabled()) logger.debug("Message consumption has caught up so okay to turn off" + " throttling of messages on the subscribe channel for {}", topicSubscriber); channel.setReadable(true); } } @Override public synchronized void startDelivery(MessageHandler messageHandler) throws AlreadyStartDeliveryException, ClientNotSubscribedException { super.startDelivery(messageHandler); // Now make the TopicSubscriber Channel readable (it is set to not be // readable when the initial subscription is done). Note that this is an // asynchronous call. If this fails (not likely), the futureListener // will just log an error message for now. ChannelFuture future = channel.setReadable(true); future.addListener(new ChannelFutureListener() { @Override public void operationComplete(ChannelFuture future) throws Exception { if (!future.isSuccess()) { logger.error("Unable to make subscriber Channel readable in startDelivery call for {}", topicSubscriber); } } }); } @Override public synchronized void stopDelivery() { super.stopDelivery(); // Now make the TopicSubscriber channel not-readable. This will buffer // up messages if any are sent from the server. Note that this is an // asynchronous call. If this fails (not likely), the futureListener // will just log an error message for now. ChannelFuture future = channel.setReadable(false); future.addListener(new ChannelFutureListener() { @Override public void operationComplete(ChannelFuture future) throws Exception { if (!future.isSuccess()) { logger.error("Unable to make subscriber Channel not readable in stopDelivery call for {}", topicSubscriber); } } }); } } // Track which subscriber is alive in this response handler // Which is used for backward compat, since old version hub // server doesn't carry (topic, subscriberid) in each message. 
private volatile TopicSubscriber origTopicSubscriber; private volatile ActiveSubscriber origActiveSubscriber; private SimpleHChannelManager sChannelManager; protected SimpleSubscribeResponseHandler(ClientConfiguration cfg, HChannelManager channelManager) { super(cfg, channelManager); sChannelManager = (SimpleHChannelManager) channelManager; } @Override protected ActiveSubscriber createActiveSubscriber( ClientConfiguration cfg, AbstractHChannelManager channelManager, TopicSubscriber ts, PubSubData op, SubscriptionPreferences preferences, Channel channel, HChannel hChannel) { return new SimpleActiveSubscriber(cfg, channelManager, ts, op, preferences, channel, hChannel); } @Override protected synchronized ActiveSubscriber getActiveSubscriber(TopicSubscriber ts) { if (null == origTopicSubscriber || !origTopicSubscriber.equals(ts)) { return null; } return origActiveSubscriber; } private synchronized ActiveSubscriber getActiveSubscriber() { return origActiveSubscriber; } @Override public synchronized boolean hasSubscription(TopicSubscriber ts) { if (null == origTopicSubscriber) { return false; } return origTopicSubscriber.equals(ts); } @Override protected synchronized boolean removeSubscription(TopicSubscriber ts, ActiveSubscriber ss) { if (null != origTopicSubscriber && !origTopicSubscriber.equals(ts)) { return false; } origTopicSubscriber = null; origActiveSubscriber = null; return super.removeSubscription(ts, ss); } @Override public void handleResponse(PubSubResponse response, PubSubData pubSubData, Channel channel) throws Exception { // If this was not a successful response to the Subscribe request, we // won't be using the Netty Channel created so just close it. if (!response.getStatusCode().equals(StatusCode.SUCCESS)) { HChannelImpl.getHChannelHandlerFromChannel(channel).closeExplicitly(); channel.close(); } super.handleResponse(response, pubSubData, channel); } @Override public void handleSubscribeMessage(PubSubResponse response) { Message message = response.getMessage(); ActiveSubscriber ss = getActiveSubscriber(); if (null == ss) { logger.error("No Subscriber is alive receiving its message {}.", MessageIdUtils.msgIdToReadableString(message.getMsgId())); return; } ss.handleMessage(message); } @Override protected Either handleSuccessResponse( TopicSubscriber ts, PubSubData pubSubData, Channel channel) { // Store the mapping for the TopicSubscriber to the Channel. // This is so we can control the starting and stopping of // message deliveries from the server on that Channel. Store // this only on a successful ack response from the server. 
Either<Boolean, HChannel> result = sChannelManager.storeSubscriptionChannel(ts, pubSubData, channel); if (result.left()) { return Either.of(StatusCode.SUCCESS, result.right()); } else { StatusCode code; if (pubSubData.isResubscribeRequest()) { code = StatusCode.RESUBSCRIBE_EXCEPTION; } else { code = StatusCode.CLIENT_ALREADY_SUBSCRIBED; } return Either.of(code, null); } } @Override protected synchronized void postHandleSuccessResponse( TopicSubscriber ts, ActiveSubscriber as) { origTopicSubscriber = ts; origActiveSubscriber = as; } @Override public void asyncCloseSubscription(final TopicSubscriber topicSubscriber, final Callback<ResponseBody> callback, final Object context) { // nothing to do, just clear status; // the channel manager takes the responsibility to close the channel callback.operationFinished(context, (ResponseBody)null); } } SimpleSubscriptionChannelPipelineFactory.java000066400000000000000000000042331244507361200435350ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/netty/impl/simple/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.client.netty.impl.simple; import java.util.HashMap; import java.util.Map; import org.apache.hedwig.client.conf.ClientConfiguration; import org.apache.hedwig.client.handlers.AbstractResponseHandler; import org.apache.hedwig.client.handlers.CloseSubscriptionResponseHandler; import org.apache.hedwig.client.netty.impl.AbstractHChannelManager; import org.apache.hedwig.client.netty.impl.ClientChannelPipelineFactory; import org.apache.hedwig.client.netty.impl.HChannelHandler; import org.apache.hedwig.protocol.PubSubProtocol.OperationType; public class SimpleSubscriptionChannelPipelineFactory extends ClientChannelPipelineFactory { public SimpleSubscriptionChannelPipelineFactory(ClientConfiguration cfg, SimpleHChannelManager channelManager) { super(cfg, channelManager); } @Override protected Map<OperationType, AbstractResponseHandler> createResponseHandlers() { Map<OperationType, AbstractResponseHandler> handlers = new HashMap<OperationType, AbstractResponseHandler>(); handlers.put(OperationType.SUBSCRIBE, new SimpleSubscribeResponseHandler(cfg, channelManager)); handlers.put(OperationType.CLOSESUBSCRIPTION, new CloseSubscriptionResponseHandler(cfg, channelManager)); return handlers; } } package-info.java000066400000000000000000000116031244507361200334610ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/netty/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License.
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ /** * A Netty-based Hedwig client implementation.
 *
 * <h2>Components</h2>
 *
 * The netty based implementation contains the following components:
 * <ul>
 * <li>{@link HChannel}: An interface wrapper of a netty {@link org.jboss.netty.channel.Channel}
 * to submit hedwig's {@link org.apache.hedwig.protocol.PubSubProtocol.PubSubRequest}s
 * to a target host.</li>
 * <li>{@link HChannelHandler}: A wrapper of a netty {@link org.jboss.netty.channel.ChannelHandler}
 * to handle events of its underlying netty channel, such as responses received, channel
 * disconnected, etc. A {@link HChannelHandler} is bound with a {@link HChannel}.</li>
 * <li>{@link HChannelManager}: A manager that manages all established {@link HChannel}s.
 * It provides a clean interface for a publisher/subscriber to send
 * {@link org.apache.hedwig.protocol.PubSubProtocol.PubSubRequest}s.</li>
 * </ul>
 *
 * <h2>Main Flow</h2>
 *
 * <ul>
 * <li>{@link HedwigPublisher}/{@link HedwigSubscriber} delegate pub/sub requests to
 * {@link HChannelManager}.</li>
 * <li>{@link HChannelManager} finds the owner hubs, establishes a {@link HChannel} to the hub
 * servers and sends the requests to them.</li>
 * <li>{@link HChannelHandler} dispatches responses to the target
 * {@link org.apache.hedwig.client.handlers.AbstractResponseHandler} to process.</li>
 * <li>{@link HChannelHandler} detects that an underlying netty {@link org.jboss.netty.channel.Channel}
 * has disconnected. It calls {@link HChannelManager} to clear the cached {@link HChannel} that
 * it is bound with. For non-subscription channels, it fails all pending requests;
 * for subscription channels, it fails all pending requests and retries to reconnect
 * the successful subscriptions.</li>
 * </ul>
 *
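 * <p>
 * As a rough sketch, an application drives this flow through the public client API (the
 * names below follow {@code org.apache.hedwig.client.HedwigClient}; the exact calls are
 * illustrative, not normative):
 * <pre>{@code
 * HedwigClient client = new HedwigClient(cfg);
 * Publisher publisher = client.getPublisher();
 * Subscriber subscriber = client.getSubscriber();
 *
 * // Both calls below end up as PubSubRequests handed to the HChannelManager,
 * // which locates the owner hub and writes the request on an HChannel.
 * publisher.asyncPublish(topic, message, callback, context);
 * subscriber.asyncSubscribe(topic, subscriberId, CreateOrAttach.CREATE_OR_ATTACH,
 *                           callback, context);
 * }</pre>
 *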

<h2>HChannel</h2>
 *
 * Two kinds of {@link HChannel}s are provided in the current implementation. {@link HChannelImpl}
 * provides the ability to multiplex pub/sub requests in an underlying netty
 * {@link org.jboss.netty.channel.Channel}, while {@link DefaultServerChannel} provides the
 * ability to establish a netty {@link org.jboss.netty.channel.Channel} for a pub/sub
 * request. After the underlying netty channel is established, it is converted into
 * a {@link HChannelImpl} by {@link HChannelManager#submitOpThruChannel(pubSubData, channel)}.
 *
 * Although {@link HChannelImpl} provides multiplexing ability, it can still be used for the
 * one-channel-per-subscription case, which sends just one subscribe request through the
 * underlying channel.
 *
 * <h2>HChannelHandler</h2>
 *
 * {@link HChannelHandler} is a generic netty {@link org.jboss.netty.channel.ChannelHandler},
 * which handles events from the underlying channel. A {@link HChannelHandler} is bound with
 * a {@link HChannel} as its channel pipeline when the underlying channel is established. It
 * takes the responsibility of dispatching responses to the target response handler. For a
 * non-subscription channel, it just handles PUBLISH and UNSUBSCRIBE responses.
 * For a subscription channel, it handles SUBSCRIBE responses. Consume requests are
 * treated in a fire-and-forget way, so they do not need to be handled by any response
 * handler.
 *
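 * <p>
 * For example, delivery on a subscription channel is throttled by toggling the
 * readability of the underlying netty channel. A simplified sketch of the logic in
 * {@code SimpleActiveSubscriber} (the limit shown is an assumed configuration value):
 * <pre>{@code
 * // before delivering: stop reading from the socket once too many
 * // messages are outstanding
 * if (outstandingMsgSet.size() >= cfg.getMaximumOutstandingMessages()) {
 *     channel.setReadable(false);
 * }
 * // after a message is consumed: resume once the set is fully drained
 * if (!channel.isReadable() && outstandingMsgSet.isEmpty()) {
 *     channel.setReadable(true);
 * }
 * }</pre>
 *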

<h2>HChannelManager</h2>
 *
 * {@link HChannelManager} manages all outstanding connections to target hub servers for a client.
 * Since a subscription channel acts quite differently from a non-subscription channel, the basic
 * implementation {@link AbstractHChannelManager} manages non-subscription channels and
 * subscription channels in different channel sets. Currently the hedwig client provides
 * {@link SimpleHChannelManager}, which manages subscription channels in a one-channel-per-subscription
 * way. In the future, if we want to multiplex multiple subscriptions in one channel, we just need
 * to provide a multiplexing version of {@link AbstractHChannelManager}, which manages channels
 * in a multiplexing way, and a multiplexing version of {@link org.apache.hedwig.client.handlers.SubscribeResponseHandler},
 * which handles multiple subscriptions in one channel. */ package org.apache.hedwig.client.netty; bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/ssl/000077500000000000000000000000001244507361200300065ustar00rootroot00000000000000SslClientContextFactory.java000066400000000000000000000025641244507361200353760ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/ssl/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.client.ssl; import javax.net.ssl.SSLContext; import org.apache.hedwig.client.conf.ClientConfiguration; public class SslClientContextFactory extends SslContextFactory { public SslClientContextFactory(ClientConfiguration cfg) { try { // Create the SSL context. ctx = SSLContext.getInstance("TLS"); ctx.init(null, getTrustManagers(), null); } catch (Exception ex) { throw new RuntimeException(ex); } } @Override protected boolean isClient() { return true; } } SslContextFactory.java000066400000000000000000000042011244507361200342250ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/client/ssl/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License.
*/ package org.apache.hedwig.client.ssl; import java.security.cert.CertificateException; import java.security.cert.X509Certificate; import javax.net.ssl.SSLContext; import javax.net.ssl.SSLEngine; import javax.net.ssl.TrustManager; import javax.net.ssl.X509TrustManager; public abstract class SslContextFactory { protected SSLContext ctx; public SSLContext getContext() { return ctx; } protected abstract boolean isClient(); public SSLEngine getEngine() { SSLEngine engine = ctx.createSSLEngine(); engine.setUseClientMode(isClient()); return engine; } protected TrustManager[] getTrustManagers() { return new TrustManager[] { new X509TrustManager() { // Always trust, even if invalid. @Override public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; } @Override public void checkServerTrusted(X509Certificate[] chain, String authType) throws CertificateException { // Always trust. } @Override public void checkClientTrusted(X509Certificate[] chain, String authType) throws CertificateException { // Always trust. } } }; } } bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/conf/000077500000000000000000000000001244507361200266545ustar00rootroot00000000000000AbstractConfiguration.java000066400000000000000000000037611244507361200337420ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/conf/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.conf; import java.net.URL; import org.apache.commons.configuration.CompositeConfiguration; import org.apache.commons.configuration.Configuration; import org.apache.commons.configuration.ConfigurationException; import org.apache.commons.configuration.PropertiesConfiguration; public abstract class AbstractConfiguration { protected CompositeConfiguration conf; protected AbstractConfiguration() { conf = new CompositeConfiguration(); } /** * Return real configuration object * * @return configuration */ public Configuration getConf() { return conf; } /** * You can load configurations in precedence order. The first one takes * precedence over any loaded later. * * @param confURL */ public void loadConf(URL confURL) throws ConfigurationException { Configuration loadedConf = new PropertiesConfiguration(confURL); conf.addConfiguration(loadedConf); } /** * Add configuration object. 
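* <p>
* An illustrative sketch (assuming a concrete {@code AbstractConfiguration} subclass);
* as with {@link #loadConf(URL)}, configurations added earlier take precedence over
* those added later:
* <pre>{@code
* myConf.loadConf(new URL("file:///path/to/hedwig/client.conf")); // wins on conflicts
* myConf.addConf(new PropertiesConfiguration("defaults.properties"));
* }</pre>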
* * @param otherConf configuration object */ public void addConf(Configuration otherConf) throws ConfigurationException { conf.addConfiguration(otherConf); } } bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/filter/000077500000000000000000000000001244507361200272145ustar00rootroot00000000000000ClientMessageFilter.java000066400000000000000000000016771244507361200337020ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/filter/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.filter; /** * Message filter running on the client side. */ public interface ClientMessageFilter extends MessageFilterBase { } bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/filter/MessageFilterBase.java000066400000000000000000000035241244507361200334100ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.filter; import com.google.protobuf.ByteString; import org.apache.hedwig.protocol.PubSubProtocol.Message; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionPreferences; public interface MessageFilterBase { /** * Set subscription preferences. * * Preferences of the subscriber will be passed to the message filter when * the message filter attaches to its subscription, either on the server side or the client side. * * @param topic * Topic Name. * @param subscriberId * Subscriber Id. * @param preferences * Subscription Preferences. * @return message filter */ public MessageFilterBase setSubscriptionPreferences(ByteString topic, ByteString subscriberId, SubscriptionPreferences preferences); /** * Tests whether a particular message passes the filter or not. * * @param message * the message to test * @return true if the message passes the filter, false otherwise */ public boolean testMessage(Message message); } bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/filter/PipelineFilter.java000066400000000000000000000045661244507361200330050ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements.
See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.filter; import java.io.IOException; import java.util.List; import java.util.LinkedList; import com.google.protobuf.ByteString; import org.apache.commons.configuration.Configuration; import org.apache.commons.configuration.ConfigurationException; import org.apache.hedwig.protocol.PubSubProtocol.Message; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionPreferences; /** * A filter that filters messages in a pipeline. */ public class PipelineFilter extends LinkedList<ServerMessageFilter> implements ServerMessageFilter { @Override public ServerMessageFilter initialize(Configuration conf) throws ConfigurationException, IOException { for (ServerMessageFilter filter : this) { filter.initialize(conf); } return this; } @Override public void uninitialize() { while (!isEmpty()) { ServerMessageFilter filter = removeLast(); filter.uninitialize(); } } @Override public MessageFilterBase setSubscriptionPreferences(ByteString topic, ByteString subscriberId, SubscriptionPreferences preferences) { for (ServerMessageFilter filter : this) { filter.setSubscriptionPreferences(topic, subscriberId, preferences); } return this; } @Override public boolean testMessage(Message message) { for (ServerMessageFilter filter : this) { if (!filter.testMessage(message)) { return false; } } return true; } } ServerMessageFilter.java000066400000000000000000000031741244507361200337260ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/filter/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.filter; import java.io.IOException; import org.apache.commons.configuration.Configuration; import org.apache.commons.configuration.ConfigurationException; /** * Message filter running on the server side. The hub server uses reflection to * instantiate a message filter to filter messages. */ public interface ServerMessageFilter extends MessageFilterBase { /** * Initialize the message filter. * * @param conf * Configuration Object. A MessageFilter might read settings from it.
* @return message filter * @throws IOException if the message filter failed to initialize */ public ServerMessageFilter initialize(Configuration conf) throws ConfigurationException, IOException; /** * Uninitialize the message filter. */ public void uninitialize(); } bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/util/000077500000000000000000000000001244507361200267045ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/util/Callback.java000066400000000000000000000030721244507361200312450ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.util; import org.apache.hedwig.exceptions.PubSubException; /** * This interface is used for callbacks for asynchronous operations. */ public interface Callback<T> { /** * This method is called when the asynchronous operation finishes. * * @param ctx * The context for the callback * @param resultOfOperation * The result of the operation */ public abstract void operationFinished(Object ctx, T resultOfOperation); /** * This method is called when the operation failed due to some reason. The * reason for failure is passed in. * * @param ctx * The context for the callback * @param exception * The reason for the failure of the operation */ public abstract void operationFailed(Object ctx, PubSubException exception); } bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/util/CallbackUtils.java000066400000000000000000000145771244507361200322760ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.util; import java.util.concurrent.LinkedBlockingQueue; import java.util.concurrent.atomic.AtomicInteger; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.exceptions.PubSubException.CompositeException; public class CallbackUtils { /** * A callback that waits for all of a number of events to fire. If any fail, * then fail the final callback with a composite exception.
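* <p>
* An illustrative usage sketch (the surrounding names here are hypothetical, not part
* of this codebase):
* <pre>{@code
* Callback<Void> whenAll = CallbackUtils.multiCallback(ops.size(), finalCb, ctx,
*     logger, "all ops finished", "some op failed", null);
* for (AsyncOp op : ops) {
*     op.run(whenAll);  // each op must invoke whenAll exactly once
* }
* }</pre>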
* * TODO: change this to use any Exception and make CompositeException * generic, not a PubSubException. * * @param expected * Number of expected callbacks. * @param cb * The final callback to call. * @param ctx * @param logger * May be null. * @param successMsg * If not null, then this is logged on success. * @param failureMsg * If not null, then this is logged on failure. * @param eagerErrorHandler * If not null, then this will be executed after the first * failure (but before the final failure callback). Useful for * releasing resources, etc. as soon as we know the composite * operation is doomed. * @return the generated callback */ public static Callback<Void> multiCallback(final int expected, final Callback<Void> cb, final Object ctx, final Logger logger, final String successMsg, final String failureMsg, Runnable eagerErrorHandler) { if (expected == 0) { cb.operationFinished(ctx, null); return null; } else { return new Callback<Void>() { final AtomicInteger done = new AtomicInteger(); final LinkedBlockingQueue<PubSubException> exceptions = new LinkedBlockingQueue<PubSubException>(); private void tick() { if (done.incrementAndGet() == expected) { if (exceptions.isEmpty()) { cb.operationFinished(ctx, null); } else { cb.operationFailed(ctx, new CompositeException(exceptions)); } } } @Override public void operationFailed(Object ctx, PubSubException exception) { if (logger != null && failureMsg != null) logger.error(failureMsg, exception); exceptions.add(exception); tick(); } @Override public void operationFinished(Object ctx, Void resultOfOperation) { if (logger != null && successMsg != null) logger.info(successMsg); tick(); } }; } } /** * A callback that waits for all of a number of events to fire. If any fail, * then fail the final callback with a composite exception. */ public static Callback<Void> multiCallback(int expected, Callback<Void> cb, Object ctx) { return multiCallback(expected, cb, ctx, null, null, null, null); } /** * A callback that waits for all of a number of events to fire. If any fail, * then fail the final callback with a composite exception. */ public static Callback<Void> multinCallback(int expected, Callback<Void> cb, Object ctx, Runnable eagerErrorHandler) { return multiCallback(expected, cb, ctx, null, null, null, eagerErrorHandler); } private static Callback<Void> nop = new Callback<Void>() { @Override public void operationFailed(Object ctx, PubSubException exception) { } @Override public void operationFinished(Object ctx, Void resultOfOperation) { } }; /** * A do-nothing callback. */ public static Callback<Void> nop() { return nop; } /** * Logs what happened before continuing the callback chain. */ public static <T> Callback<T> logger(final Logger logger, final String successMsg, final String failureMsg, final Callback<T> cont) { return new Callback<T>() { @Override public void operationFailed(Object ctx, PubSubException exception) { logger.error(failureMsg, exception); if (cont != null) cont.operationFailed(ctx, exception); } @Override public void operationFinished(Object ctx, T resultOfOperation) { logger.info(successMsg); if (cont != null) cont.operationFinished(ctx, resultOfOperation); } }; } /** * Logs what happened (no continuation). */ public static Callback<Void> logger(Logger logger, String successMsg, String failureMsg) { return logger(logger, successMsg, failureMsg, nop()); } /** * Return a Callback that just calls the given Callback cb with the * bound result.
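* <p>
* For example (illustrative): if {@code cb} is a {@code Callback<Integer>}, then
* {@code curry(cb, 42)} yields a {@code Callback<Void>} whose
* {@code operationFinished(ctx, null)} invokes {@code cb.operationFinished(ctx, 42)}.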
*/ public static <T> Callback<Void> curry(final Callback<T> cb, final T result) { return new Callback<Void>() { @Override public void operationFailed(Object ctx, PubSubException exception) { cb.operationFailed(ctx, exception); } @Override public void operationFinished(Object ctx, Void resultOfOperation) { cb.operationFinished(ctx, result); } }; } } bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/util/ConcurrencyUtils.java000066400000000000000000000030531244507361200330630ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.util; import java.util.concurrent.BlockingQueue; import java.util.concurrent.CyclicBarrier; public class ConcurrencyUtils { public static <T, U extends T, V extends BlockingQueue<T>> void put(V queue, U value) { try { queue.put(value); } catch (Exception ex) { throw new RuntimeException(ex); } } public static <T> T take(BlockingQueue<T> queue) { try { return queue.take(); } catch (Exception ex) { throw new RuntimeException(ex); } } public static void await(CyclicBarrier barrier) { try { barrier.await(); } catch (Exception ex) { throw new RuntimeException(ex); } } } bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/util/Either.java000066400000000000000000000025411244507361200307710ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.util; public class Either<T, U> { private T x; private U y; private Either(T x, U y) { this.x = x; this.y = y; } public static <T, U> Either<T, U> of(T x, U y) { return new Either<T, U>(x, y); } public static <T, U> Either<T, U> left(T x) { return new Either<T, U>(x, null); } public static <T, U> Either<T, U> right(U y) { return new Either<T, U>(null, y); } public T left() { return x; } public U right() { return y; } } bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/util/FileUtils.java000066400000000000000000000056201244507361200314520ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements.
See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.util; import java.io.File; import java.io.IOException; import java.util.LinkedList; import java.util.List; import org.slf4j.Logger; import org.slf4j.LoggerFactory; public class FileUtils { static DirDeleterThred dirDeleterThread; static Logger log = LoggerFactory.getLogger(FileUtils.class); static { dirDeleterThread = new DirDeleterThred(); Runtime.getRuntime().addShutdownHook(dirDeleterThread); } public static File createTempDirectory(String prefix) throws IOException { return createTempDirectory(prefix, null); } public static File createTempDirectory(String prefix, String suffix) throws IOException { File tempDir = File.createTempFile(prefix, suffix); if (!tempDir.delete()) { throw new IOException("Could not delete temp file: " + tempDir.getAbsolutePath()); } if (!tempDir.mkdir()) { throw new IOException("Could not create temp directory: " + tempDir.getAbsolutePath()); } dirDeleterThread.addDirToDelete(tempDir); return tempDir; } static class DirDeleterThred extends Thread { List<File> dirsToDelete = new LinkedList<File>(); public synchronized void addDirToDelete(File dir) { dirsToDelete.add(dir); } @Override public void run() { synchronized (this) { for (File dir : dirsToDelete) { deleteDirectory(dir); } } } protected void deleteDirectory(File dir) { if (dir.isFile()) { if (!dir.delete()) { log.error("Could not delete " + dir.getAbsolutePath()); } return; } File[] files = dir.listFiles(); if (files == null) { return; } for (File f : files) { deleteDirectory(f); } if (!dir.delete()) { log.error("Could not delete directory: " + dir.getAbsolutePath()); } } } } bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/util/HedwigSocketAddress.java000066400000000000000000000122501244507361200334350ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.util; import java.net.InetSocketAddress; /** * This is a data wrapper class that is basically an InetSocketAddress with one * extra piece of information for the SSL port (optional).
This is used by * Hedwig so we can encapsulate both regular and SSL port information in one * data structure. Hedwig hub servers can be configured to listen on the * standard regular port and additionally on an optional SSL port. The String * representation of a HedwigSocketAddress is: hostname:port:sslPort */ public class HedwigSocketAddress { // Member fields that make up this class. private final String hostname; private final int port; private final int sslPort; private final InetSocketAddress socketAddress; private final InetSocketAddress sslSocketAddress; // Constants used by this class. public static final String COLON = ":"; private static final int NO_SSL_PORT = -1; // Constructor that takes in both a regular and SSL port. public HedwigSocketAddress(String hostname, int port, int sslPort) { this.hostname = hostname; this.port = port; this.sslPort = sslPort; socketAddress = new InetSocketAddress(hostname, port); if (sslPort != NO_SSL_PORT) sslSocketAddress = new InetSocketAddress(hostname, sslPort); else sslSocketAddress = null; } // Constructor that only takes in a regular port. public HedwigSocketAddress(String hostname, int port) { this(hostname, port, NO_SSL_PORT); } // Constructor from a String "serialized" version of this class. public HedwigSocketAddress(String addr) { String[] parts = addr.split(COLON); this.hostname = parts[0]; this.port = Integer.parseInt(parts[1]); if (parts.length > 2) this.sslPort = Integer.parseInt(parts[2]); else this.sslPort = NO_SSL_PORT; socketAddress = new InetSocketAddress(hostname, port); if (sslPort != NO_SSL_PORT) sslSocketAddress = new InetSocketAddress(hostname, sslPort); else sslSocketAddress = null; } // Public getters public String getHostname() { return hostname; } public int getPort() { return port; } public int getSSLPort() { return sslPort; } // Method to return an InetSocketAddress for the regular port. public InetSocketAddress getSocketAddress() { return socketAddress; } // Method to return an InetSocketAddress for the SSL port. // Note that if no SSL port (or an invalid value) was passed // during object creation, this call will throw an IllegalArgumentException // (runtime exception). public InetSocketAddress getSSLSocketAddress() { return sslSocketAddress; } // Method to determine if this object instance is SSL enabled or not // (contains a valid SSL port). public boolean isSSLEnabled() { return sslPort != NO_SSL_PORT; } // Return the String "serialized" version of this object. @Override public String toString() { StringBuilder sb = new StringBuilder(); sb.append(hostname).append(COLON).append(port).append(COLON).append(sslPort); return sb.toString(); } // Implement an equals method comparing two HedwigSocketAddress objects. @Override public boolean equals(Object obj) { if (!(obj instanceof HedwigSocketAddress)) return false; HedwigSocketAddress that = (HedwigSocketAddress) obj; return (this.hostname.equals(that.hostname) && (this.port == that.port) && (this.sslPort == that.sslPort)); } @Override public int hashCode() { return (this.hostname + this.port + this.sslPort).hashCode(); } // Static helper method to return the string representation for an // InetSocketAddress. The HedwigClient can only operate in SSL or non-SSL // mode. So the server hosts it connects to will just be an // InetSocketAddress instead of a HedwigSocketAddress. This utility method // can be used so we can store these server hosts as strings (ByteStrings) // in various places (e.g. list of server hosts we've connected to // or wrote to unsuccessfully).
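// For example (illustrative): sockAddrStr(new InetSocketAddress("10.11.12.13", 4080))
// returns "10.11.12.13:4080". Note the raw IP (getHostAddress()) is used, not the
// hostname, so the address must be resolved.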
public static String sockAddrStr(InetSocketAddress addr) { return addr.getAddress().getHostAddress() + ":" + addr.getPort(); } } bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/util/Option.java000066400000000000000000000022251244507361200310200ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.util; public class Option<T> { private T x; public static <T> Option<T> of(T x) { return new Option<T>(x); } public static <T> Option<T> of() { return new Option<T>(); } public Option() { } public Option(T x) { this.x = x; } public T get() { return x; } } bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/util/Pair.java000066400000000000000000000022231244507361200304410ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.util; public class Pair<T, U> { private T x; private U y; public Pair(T x, U y) { this.x = x; this.y = y; } public static <T, U> Pair<T, U> of(T x, U y) { return new Pair<T, U>(x, y); } public T first() { return x; } public U second() { return y; } } bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/util/PathUtils.java000066400000000000000000000037721244507361200314730ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License.
*/ package org.apache.hedwig.util; import java.io.File; import java.util.ArrayList; import java.util.List; public class PathUtils { /** Generate all prefixes for a path. "/a/b/c" -> ["/a","/a/b","/a/b/c"] */ public static List<String> prefixes(String path) { List<String> prefixes = new ArrayList<String>(); StringBuilder prefix = new StringBuilder(); for (String comp : path.split("/+")) { // Skip the first (empty) path component. if (!comp.equals("")) { prefix.append("/").append(comp); prefixes.add(prefix.toString()); } } return prefixes; } /** Return true iff prefix is a prefix of path. */ public static boolean isPrefix(String prefix, String path) { String[] as = prefix.split("/+"), bs = path.split("/+"); if (as.length > bs.length) return false; for (int i = 0; i < as.length; i++) if (!as[i].equals(bs[i])) return false; return true; } /** Like File.getParent but always uses the / separator. */ public static String parent(String path) { return new File(path).getParent().replace("\\", "/"); } } SubscriptionListener.java000066400000000000000000000030531244507361200336630ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/util/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.util; import com.google.protobuf.ByteString; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionEvent; /** * This interface is used by a subscriber to listen on subscription events. */ public interface SubscriptionListener { /** * Process an event from a subscription. *

<p>
 * NOTE: It would be better to not run blocking operations in a
 * listener implementation.
 * </p>

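*
 * An implementation might, for instance, treat an event as a signal to clean up or
 * re-establish state off the delivery thread (an illustrative sketch; the event values
 * come from {@link org.apache.hedwig.protocol.PubSubProtocol.SubscriptionEvent}):
 * <pre>{@code
 * public void processEvent(ByteString topic, ByteString subscriberId,
 *                          SubscriptionEvent event) {
 *     if (SubscriptionEvent.TOPIC_MOVED == event) {
 *         // schedule a resubscribe on another thread rather than blocking here
 *     }
 * }
 * }</pre>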
* * @param topic * Topic Name * @param subscriberId * Subscriber Id * @param event * Event tell what happened to the subscription. */ public void processEvent(ByteString topic, ByteString subscriberId, SubscriptionEvent event); } bookkeeper-release-4.2.4/hedwig-client/src/main/java/org/apache/hedwig/util/VarArgs.java000066400000000000000000000016551244507361200311230ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.util; public class VarArgs { public static Object[] va(Object...args) { return args; } } bookkeeper-release-4.2.4/hedwig-client/src/test/000077500000000000000000000000001244507361200215625ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/test/java/000077500000000000000000000000001244507361200225035ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/test/java/org/000077500000000000000000000000001244507361200232725ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/test/java/org/apache/000077500000000000000000000000001244507361200245135ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/test/java/org/apache/hedwig/000077500000000000000000000000001244507361200257625ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/test/java/org/apache/hedwig/client/000077500000000000000000000000001244507361200272405ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/test/java/org/apache/hedwig/client/AppTest.java000066400000000000000000000026541244507361200314720ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.client; import junit.framework.Test; import junit.framework.TestCase; import junit.framework.TestSuite; /** * Unit test for simple App. 
*/ public class AppTest extends TestCase { /** * Create the test case * * @param testName * name of the test case */ public AppTest(String testName) { super(testName); } /** * @return the suite of tests being tested */ public static Test suite() { return new TestSuite(AppTest.class); } /** * Rigourous Test :-) */ public void testApp() { assertTrue(true); } } bookkeeper-release-4.2.4/hedwig-client/src/test/java/org/apache/hedwig/util/000077500000000000000000000000001244507361200267375ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/test/java/org/apache/hedwig/util/TestFileUtils.java000066400000000000000000000026601244507361200323460ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.util; import java.io.File; import org.junit.Test; import junit.framework.TestCase; public class TestFileUtils extends TestCase { @Test(timeout=60000) public void testCreateTmpDirectory() throws Exception { String prefix = "abc"; String suffix = "def"; File dir = FileUtils.createTempDirectory(prefix, suffix); assertTrue(dir.isDirectory()); assertTrue(dir.getName().startsWith(prefix)); assertTrue(dir.getName().endsWith(suffix)); FileUtils.dirDeleterThread.start(); FileUtils.dirDeleterThread.join(); assertFalse(dir.exists()); } } TestHedwigSocketAddress.java000066400000000000000000000077511244507361200342630ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-client/src/test/java/org/apache/hedwig/util/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package org.apache.hedwig.util; import java.net.InetSocketAddress; import junit.framework.TestCase; import org.junit.Test; public class TestHedwigSocketAddress extends TestCase { // Common values used by tests private String hostname = "localhost"; private int port = 4080; private int sslPort = 9876; private int invalidPort = -9999; private String COLON = ":"; @Test(timeout=60000) public void testCreateWithSSLPort() throws Exception { HedwigSocketAddress addr = new HedwigSocketAddress(hostname, port, sslPort); assertTrue(addr.getSocketAddress().equals(new InetSocketAddress(hostname, port))); assertTrue(addr.getSSLSocketAddress().equals(new InetSocketAddress(hostname, sslPort))); } @Test(timeout=60000) public void testCreateWithNoSSLPort() throws Exception { HedwigSocketAddress addr = new HedwigSocketAddress(hostname, port); assertTrue(addr.getSocketAddress().equals(new InetSocketAddress(hostname, port))); assertTrue(addr.getSSLSocketAddress() == null); } @Test(timeout=60000) public void testCreateFromStringWithSSLPort() throws Exception { HedwigSocketAddress addr = new HedwigSocketAddress(hostname+COLON+port+COLON+sslPort); assertTrue(addr.getSocketAddress().equals(new InetSocketAddress(hostname, port))); assertTrue(addr.getSSLSocketAddress().equals(new InetSocketAddress(hostname, sslPort))); } @Test(timeout=60000) public void testCreateFromStringWithNoSSLPort() throws Exception { HedwigSocketAddress addr = new HedwigSocketAddress(hostname+COLON+port); assertTrue(addr.getSocketAddress().equals(new InetSocketAddress(hostname, port))); assertTrue(addr.getSSLSocketAddress() == null); } @Test(timeout=60000) public void testCreateWithInvalidRegularPort() throws Exception { boolean success = false; try { new HedwigSocketAddress(hostname+COLON+invalidPort); } catch (IllegalArgumentException e) { success = true; } assertTrue(success); } @Test(timeout=60000) public void testCreateWithInvalidSSLPort() throws Exception { boolean success = false; try { new HedwigSocketAddress(hostname, port, invalidPort); } catch (IllegalArgumentException e) { success = true; } assertTrue(success); } @Test(timeout=60000) public void testToStringConversion() throws Exception { HedwigSocketAddress addr = new HedwigSocketAddress(hostname, port, sslPort); HedwigSocketAddress addr2 = new HedwigSocketAddress(addr.toString()); assertTrue(addr.getSocketAddress().equals(addr2.getSocketAddress())); assertTrue(addr.getSSLSocketAddress().equals(addr2.getSSLSocketAddress())); addr.toString().equals(addr2.toString()); } @Test(timeout=60000) public void testIsSSLEnabledFlag() throws Exception { HedwigSocketAddress sslAddr = new HedwigSocketAddress(hostname, port, sslPort); assertTrue(sslAddr.isSSLEnabled()); HedwigSocketAddress addr = new HedwigSocketAddress(hostname, port); assertFalse(addr.isSSLEnabled()); } } bookkeeper-release-4.2.4/hedwig-client/src/test/java/org/apache/hedwig/util/TestPathUtils.java000066400000000000000000000041721244507361200323630ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. 
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.util; import java.util.Arrays; import junit.framework.TestCase; import org.junit.Test; public class TestPathUtils extends TestCase { @Test(timeout=60000) public void testPrefixes() { assertEquals(Arrays.asList(new String[] { "/a", "/a/b", "/a/b/c" }), PathUtils.prefixes("/a/b/c")); assertEquals(Arrays.asList(new String[] { "/a", "/a/b", "/a/b/c" }), PathUtils.prefixes("///a///b///c")); } @Test(timeout=60000) public void testIsPrefix() { String[] paths = new String[] { "/", "/a", "/a/b" }; for (int i = 0; i < paths.length; i++) { for (int j = 0; j <= i; j++) { assertTrue(PathUtils.isPrefix(paths[j], paths[i])); assertTrue(PathUtils.isPrefix(paths[j], paths[i] + "/")); assertTrue(PathUtils.isPrefix(paths[j] + "/", paths[i])); assertTrue(PathUtils.isPrefix(paths[j] + "/", paths[i] + "/")); } for (int j = i + 1; j < paths.length; j++) { assertFalse(PathUtils.isPrefix(paths[j], paths[i])); assertFalse(PathUtils.isPrefix(paths[j], paths[i] + "/")); assertFalse(PathUtils.isPrefix(paths[j] + "/", paths[i])); assertFalse(PathUtils.isPrefix(paths[j] + "/", paths[i] + "/")); } } } } bookkeeper-release-4.2.4/hedwig-protocol/000077500000000000000000000000001244507361200203775ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-protocol/pom.xml000066400000000000000000000073231244507361200217210ustar00rootroot00000000000000 4.0.0 org.apache.bookkeeper bookkeeper 4.2.4 hedwig-protocol jar hedwig-protocol http://maven.apache.org com.google.protobuf protobuf-java ${protobuf.version} compile junit junit 4.8.1 test org.slf4j slf4j-api 1.6.4 org.slf4j slf4j-log4j12 1.6.4 install maven-assembly-plugin 2.2.1 true org.apache.rat apache-rat-plugin 0.7 **/PubSubProtocol.java org.codehaus.mojo findbugs-maven-plugin ${basedir}/src/main/resources/findbugsExclude.xml protobuf maven-antrun-plugin generate-sources default-cli run bookkeeper-release-4.2.4/hedwig-protocol/src/000077500000000000000000000000001244507361200211665ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-protocol/src/main/000077500000000000000000000000001244507361200221125ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-protocol/src/main/java/000077500000000000000000000000001244507361200230335ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-protocol/src/main/java/org/000077500000000000000000000000001244507361200236225ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-protocol/src/main/java/org/apache/000077500000000000000000000000001244507361200250435ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-protocol/src/main/java/org/apache/hedwig/000077500000000000000000000000001244507361200263125ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-protocol/src/main/java/org/apache/hedwig/exceptions/000077500000000000000000000000001244507361200304735ustar00rootroot00000000000000PubSubException.java000066400000000000000000000231101244507361200343330ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-protocol/src/main/java/org/apache/hedwig/exceptions/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. 
See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.exceptions; import java.util.Collection; import java.util.Iterator; import org.apache.hedwig.protocol.PubSubProtocol.StatusCode; @SuppressWarnings("serial") public abstract class PubSubException extends Exception { protected StatusCode code; protected PubSubException(StatusCode code, String msg) { super(msg); this.code = code; } protected PubSubException(StatusCode code, Throwable t) { super(t); this.code = code; } protected PubSubException(StatusCode code, String msg, Throwable t) { super(msg, t); this.code = code; } public static PubSubException create(StatusCode code, String msg) { if (code == StatusCode.CLIENT_ALREADY_SUBSCRIBED) { return new ClientAlreadySubscribedException(msg); } else if (code == StatusCode.CLIENT_NOT_SUBSCRIBED) { return new ClientNotSubscribedException(msg); } else if (code == StatusCode.MALFORMED_REQUEST) { return new MalformedRequestException(msg); } else if (code == StatusCode.NO_SUCH_TOPIC) { return new NoSuchTopicException(msg); } else if (code == StatusCode.NOT_RESPONSIBLE_FOR_TOPIC) { return new ServerNotResponsibleForTopicException(msg); } else if (code == StatusCode.SERVICE_DOWN) { return new ServiceDownException(msg); } else if (code == StatusCode.COULD_NOT_CONNECT) { return new CouldNotConnectException(msg); } else if (code == StatusCode.TOPIC_BUSY) { return new TopicBusyException(msg); } else if (code == StatusCode.BAD_VERSION) { return new BadVersionException(msg); } else if (code == StatusCode.NO_TOPIC_PERSISTENCE_INFO) { return new NoTopicPersistenceInfoException(msg); } else if (code == StatusCode.TOPIC_PERSISTENCE_INFO_EXISTS) { return new TopicPersistenceInfoExistsException(msg); } else if (code == StatusCode.NO_SUBSCRIPTION_STATE) { return new NoSubscriptionStateException(msg); } else if (code == StatusCode.SUBSCRIPTION_STATE_EXISTS) { return new SubscriptionStateExistsException(msg); } else if (code == StatusCode.NO_TOPIC_OWNER_INFO) { return new NoTopicOwnerInfoException(msg); } else if (code == StatusCode.TOPIC_OWNER_INFO_EXISTS) { return new TopicOwnerInfoExistsException(msg); } else if (code == StatusCode.INVALID_MESSAGE_FILTER) { return new InvalidMessageFilterException(msg); } else if (code == StatusCode.RESUBSCRIBE_EXCEPTION) { return new ResubscribeException(msg); } /* * Insert new ones here */ else if (code == StatusCode.UNCERTAIN_STATE) { return new UncertainStateException(msg); } // Finally the catch all exception (for unexpected error conditions) else { return new UnexpectedConditionException("Unknow status code:" + code.getNumber() + ", msg: " + msg); } } public StatusCode getCode() { return code; } public static class ClientAlreadySubscribedException extends PubSubException { public ClientAlreadySubscribedException(String msg) { super(StatusCode.CLIENT_ALREADY_SUBSCRIBED, msg); } } public static class ClientNotSubscribedException extends 
PubSubException { public ClientNotSubscribedException(String msg) { super(StatusCode.CLIENT_NOT_SUBSCRIBED, msg); } } public static class ResubscribeException extends PubSubException { public ResubscribeException(String msg) { super(StatusCode.RESUBSCRIBE_EXCEPTION, msg); } } public static class MalformedRequestException extends PubSubException { public MalformedRequestException(String msg) { super(StatusCode.MALFORMED_REQUEST, msg); } } public static class NoSuchTopicException extends PubSubException { public NoSuchTopicException(String msg) { super(StatusCode.NO_SUCH_TOPIC, msg); } } public static class ServerNotResponsibleForTopicException extends PubSubException { // Note the exception message serves as the name of the responsible host public ServerNotResponsibleForTopicException(String responsibleHost) { super(StatusCode.NOT_RESPONSIBLE_FOR_TOPIC, responsibleHost); } } public static class TopicBusyException extends PubSubException { public TopicBusyException(String msg) { super(StatusCode.TOPIC_BUSY, msg); } } public static class ServiceDownException extends PubSubException { public ServiceDownException(String msg) { super(StatusCode.SERVICE_DOWN, msg); } public ServiceDownException(Exception e) { super(StatusCode.SERVICE_DOWN, e); } public ServiceDownException(String msg, Throwable t) { super(StatusCode.SERVICE_DOWN, msg, t); } } public static class CouldNotConnectException extends PubSubException { public CouldNotConnectException(String msg) { super(StatusCode.COULD_NOT_CONNECT, msg); } } public static class BadVersionException extends PubSubException { public BadVersionException(String msg) { super(StatusCode.BAD_VERSION, msg); } } public static class NoTopicPersistenceInfoException extends PubSubException { public NoTopicPersistenceInfoException(String msg) { super(StatusCode.NO_TOPIC_PERSISTENCE_INFO, msg); } } public static class TopicPersistenceInfoExistsException extends PubSubException { public TopicPersistenceInfoExistsException(String msg) { super(StatusCode.TOPIC_PERSISTENCE_INFO_EXISTS, msg); } } public static class NoSubscriptionStateException extends PubSubException { public NoSubscriptionStateException(String msg) { super(StatusCode.NO_SUBSCRIPTION_STATE, msg); } } public static class SubscriptionStateExistsException extends PubSubException { public SubscriptionStateExistsException(String msg) { super(StatusCode.SUBSCRIPTION_STATE_EXISTS, msg); } } public static class NoTopicOwnerInfoException extends PubSubException { public NoTopicOwnerInfoException(String msg) { super(StatusCode.NO_TOPIC_OWNER_INFO, msg); } } public static class TopicOwnerInfoExistsException extends PubSubException { public TopicOwnerInfoExistsException(String msg) { super(StatusCode.TOPIC_OWNER_INFO_EXISTS, msg); } } public static class InvalidMessageFilterException extends PubSubException { public InvalidMessageFilterException(String msg) { super(StatusCode.INVALID_MESSAGE_FILTER, msg); } public InvalidMessageFilterException(String msg, Throwable t) { super(StatusCode.INVALID_MESSAGE_FILTER, msg, t); } } public static class UncertainStateException extends PubSubException { public UncertainStateException(String msg) { super(StatusCode.UNCERTAIN_STATE, msg); } } // The catch all exception (for unexpected error conditions) public static class UnexpectedConditionException extends PubSubException { public UnexpectedConditionException(String msg) { super(StatusCode.UNEXPECTED_CONDITION, msg); } public UnexpectedConditionException(String msg, Throwable t) { super(StatusCode.UNEXPECTED_CONDITION, msg, t); 
} } // The composite exception (for concurrent operations). public static class CompositeException extends PubSubException { private final Collection<PubSubException> exceptions; public CompositeException(Collection<PubSubException> exceptions) { super(StatusCode.COMPOSITE, compositeMessage(exceptions)); this.exceptions = exceptions; } public Collection<PubSubException> getExceptions() { return exceptions; } /** Merges the message fields of the given exceptions into a one-line string. */ private static String compositeMessage(Collection<PubSubException> exceptions) { StringBuilder builder = new StringBuilder("Composite exception: ["); Iterator<PubSubException> iter = exceptions.iterator(); if (iter.hasNext()) builder.append(iter.next().getMessage()); while (iter.hasNext()) builder.append(" :: ").append(iter.next().getMessage()); return builder.append("]").toString(); } } public static class ClientNotSubscribedRuntimeException extends RuntimeException { } } bookkeeper-release-4.2.4/hedwig-protocol/src/main/java/org/apache/hedwig/protocol/000077500000000000000000000000001244507361200301535ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-protocol/src/main/java/org/apache/hedwig/protoextensions/000077500000000000000000000000001244507361200315755ustar00rootroot00000000000000MapUtils.java000066400000000000000000000054761244507361200341300ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-protocol/src/main/java/org/apache/hedwig/protoextensions/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.protoextensions; import java.util.HashMap; import java.util.Map; import com.google.protobuf.ByteString; import org.apache.hedwig.protocol.PubSubProtocol; import org.slf4j.Logger; import org.slf4j.LoggerFactory; public class MapUtils { static final Logger logger = LoggerFactory.getLogger(MapUtils.class); public static String toString(PubSubProtocol.Map map) { StringBuilder sb = new StringBuilder(); int numEntries = map.getEntriesCount(); for (int i=0; i<numEntries; i++) { PubSubProtocol.Map.Entry entry = map.getEntries(i); sb.append(entry.getKey()).append('=').append(entry.getValue().toStringUtf8()).append(';'); } return sb.toString(); } public static Map<String, ByteString> buildMap(PubSubProtocol.Map protoMap) { Map<String, ByteString> javaMap = new HashMap<String, ByteString>(); int numEntries = protoMap.getEntriesCount(); for (int i=0; i<numEntries; i++) { PubSubProtocol.Map.Entry entry = protoMap.getEntries(i); javaMap.put(entry.getKey(), entry.getValue()); } return javaMap; } public static PubSubProtocol.Map.Builder buildMapBuilder(Map<String, ByteString> javaMap) { PubSubProtocol.Map.Builder mapBuilder = PubSubProtocol.Map.newBuilder(); for (Map.Entry<String, ByteString> entry : javaMap.entrySet()) { mapBuilder.addEntries(PubSubProtocol.Map.Entry.newBuilder().setKey(entry.getKey()) .setValue(entry.getValue())); } return mapBuilder; } } MessageIdUtils.java000066400000000000000000000123371244507361200352510ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-protocol/src/main/java/org/apache/hedwig/protoextensions/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership.
The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.protoextensions; import java.util.HashMap; import java.util.List; import java.util.Map; import com.google.protobuf.ByteString; import org.apache.hedwig.protocol.PubSubProtocol.Message; import org.apache.hedwig.exceptions.PubSubException.UnexpectedConditionException; import org.apache.hedwig.protocol.PubSubProtocol.MessageSeqId; import org.apache.hedwig.protocol.PubSubProtocol.RegionSpecificSeqId; public class MessageIdUtils { public static String msgIdToReadableString(MessageSeqId seqId) { StringBuilder sb = new StringBuilder(); sb.append("local:"); sb.append(seqId.getLocalComponent()); String separator = ";"; for (RegionSpecificSeqId regionId : seqId.getRemoteComponentsList()) { sb.append(separator); sb.append(regionId.getRegion().toStringUtf8()); sb.append(':'); sb.append(regionId.getSeqId()); } return sb.toString(); } public static Map<ByteString, RegionSpecificSeqId> inMapForm(MessageSeqId msi) { Map<ByteString, RegionSpecificSeqId> map = new HashMap<ByteString, RegionSpecificSeqId>(); for (RegionSpecificSeqId lmsid : msi.getRemoteComponentsList()) { map.put(lmsid.getRegion(), lmsid); } return map; } public static boolean areEqual(MessageSeqId m1, MessageSeqId m2) { if (m1.getLocalComponent() != m2.getLocalComponent()) { return false; } if (m1.getRemoteComponentsCount() != m2.getRemoteComponentsCount()) { return false; } Map<ByteString, RegionSpecificSeqId> m2map = inMapForm(m2); for (RegionSpecificSeqId lmsid1 : m1.getRemoteComponentsList()) { RegionSpecificSeqId lmsid2 = m2map.get(lmsid1.getRegion()); if (lmsid2 == null) { return false; } if (lmsid1.getSeqId() != lmsid2.getSeqId()) { return false; } } return true; } public static Message mergeLocalSeqId(Message.Builder messageBuilder, long localSeqId) { MessageSeqId.Builder msidBuilder = MessageSeqId.newBuilder(messageBuilder.getMsgId()); msidBuilder.setLocalComponent(localSeqId); messageBuilder.setMsgId(msidBuilder); return messageBuilder.build(); } public static Message mergeLocalSeqId(Message originalMessage, long localSeqId) { return mergeLocalSeqId(Message.newBuilder(originalMessage), localSeqId); } /** * Compares two seq numbers represented as lists of longs. * * @param l1 * @param l2 * @return 1 if l1 is greater, 0 if they are equal, -1 if l2 is greater * @throws UnexpectedConditionException * If the lists are of unequal length */ public static int compare(List<Long> l1, List<Long> l2) throws UnexpectedConditionException { if (l1.size() != l2.size()) { throw new UnexpectedConditionException("Seq-ids being compared have different sizes: " + l1.size() + " and " + l2.size()); } for (int i = 0; i < l1.size(); i++) { long v1 = l1.get(i); long v2 = l2.get(i); if (v1 == v2) { continue; } return v1 > v2 ? 1 : -1; } // All components equal return 0; } /** * Returns the element-wise vector maximum of the two vectors id1 and id2, * if we imagine them to be sparse representations of vectors.
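* For example (illustrative values only): the region-wise maximum of {r1:5, r2:3} and {r1:4, r3:7} is {r1:5, r2:3, r3:7}; for each region the larger seq-id wins, and a region present in only one of the two ids is carried over unchanged.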
*/ public static void takeRegionMaximum(MessageSeqId.Builder newIdBuilder, MessageSeqId id1, MessageSeqId id2) { Map<ByteString, RegionSpecificSeqId> id2Map = MessageIdUtils.inMapForm(id2); for (RegionSpecificSeqId rrsid1 : id1.getRemoteComponentsList()) { ByteString region = rrsid1.getRegion(); RegionSpecificSeqId rssid2 = id2Map.get(region); if (rssid2 == null) { newIdBuilder.addRemoteComponents(rrsid1); continue; } newIdBuilder.addRemoteComponents((rrsid1.getSeqId() > rssid2.getSeqId()) ? rrsid1 : rssid2); // remove from map id2Map.remove(region); } // now take the remaining components in the map and add them for (RegionSpecificSeqId rssid2 : id2Map.values()) { newIdBuilder.addRemoteComponents(rssid2); } } } PubSubResponseUtils.java000066400000000000000000000062211244507361200363220ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-protocol/src/main/java/org/apache/hedwig/protoextensions/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.protoextensions; import com.google.protobuf.ByteString; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.protocol.PubSubProtocol.ProtocolVersion; import org.apache.hedwig.protocol.PubSubProtocol.PubSubResponse; import org.apache.hedwig.protocol.PubSubProtocol.ResponseBody; import org.apache.hedwig.protocol.PubSubProtocol.StatusCode; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionEvent; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionEventResponse; public class PubSubResponseUtils { /** * Change here if bumping up the version number that the server sends back */ public final static ProtocolVersion serverVersion = ProtocolVersion.VERSION_ONE; static PubSubResponse.Builder getBasicBuilder(StatusCode status) { return PubSubResponse.newBuilder().setProtocolVersion(serverVersion).setStatusCode(status); } public static PubSubResponse getSuccessResponse(long txnId) { return getBasicBuilder(StatusCode.SUCCESS).setTxnId(txnId).build(); } public static PubSubResponse getSuccessResponse(long txnId, ResponseBody respBody) { return getBasicBuilder(StatusCode.SUCCESS).setTxnId(txnId) .setResponseBody(respBody).build(); } public static PubSubResponse getResponseForException(PubSubException e, long txnId) { return getBasicBuilder(e.getCode()).setStatusMsg(e.getMessage()).setTxnId(txnId).build(); } public static PubSubResponse getResponseForSubscriptionEvent(ByteString topic, ByteString subscriberId, SubscriptionEvent event) { SubscriptionEventResponse.Builder eventBuilder = SubscriptionEventResponse.newBuilder().setEvent(event); ResponseBody.Builder respBuilder = ResponseBody.newBuilder().setSubscriptionEvent(eventBuilder); PubSubResponse response = PubSubResponse.newBuilder() .setProtocolVersion(ProtocolVersion.VERSION_ONE) .setStatusCode(StatusCode.SUCCESS).setTxnId(0)
.setTopic(topic).setSubscriberId(subscriberId) .setResponseBody(respBuilder).build(); return response; } } SubscriptionStateUtils.java000066400000000000000000000077421244507361200371010ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-protocol/src/main/java/org/apache/hedwig/protoextensions/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.protoextensions; import java.util.HashMap; import java.util.Map; import com.google.protobuf.ByteString; import com.google.protobuf.InvalidProtocolBufferException; import org.apache.hedwig.protocol.PubSubProtocol; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionData; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionPreferences; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionState; import org.slf4j.Logger; import org.slf4j.LoggerFactory; public class SubscriptionStateUtils { static final Logger logger = LoggerFactory.getLogger(SubscriptionStateUtils.class); // For now, to differentiate hub subscribers from local ones, the // subscriberId will be prepended with a hard-coded prefix. Local // subscribers will validate that the subscriberId used cannot start with // this prefix. This is only used internally by the hub subscribers. public static final String HUB_SUBSCRIBER_PREFIX = "__"; public static SubscriptionData parseSubscriptionData(byte[] data) throws InvalidProtocolBufferException { try { return SubscriptionData.parseFrom(data); } catch (InvalidProtocolBufferException ex) { logger.info("Failed to parse data as SubscriptionData. 
Falling back to parsing it as SubscriptionState for backward compatibility."); // backward compatibility SubscriptionState state = SubscriptionState.parseFrom(data); return SubscriptionData.newBuilder().setState(state).build(); } } public static String toString(SubscriptionData data) { StringBuilder sb = new StringBuilder(); if (data.hasState()) { sb.append("State : { ").append(toString(data.getState())).append(" };"); } if (data.hasPreferences()) { sb.append("Preferences : { ").append(toString(data.getPreferences())).append(" };"); } return sb.toString(); } public static String toString(SubscriptionState state) { StringBuilder sb = new StringBuilder(); sb.append("consumeSeqId: " + MessageIdUtils.msgIdToReadableString(state.getMsgId())); return sb.toString(); } public static String toString(SubscriptionPreferences preferences) { StringBuilder sb = new StringBuilder(); sb.append("System Preferences : ["); if (preferences.hasMessageBound()) { sb.append("(messageBound=").append(preferences.getMessageBound()) .append(")"); } sb.append("]"); if (preferences.hasOptions()) { sb.append(", Customized Preferences : ["); sb.append(MapUtils.toString(preferences.getOptions())); sb.append("]"); } return sb.toString(); } public static boolean isHubSubscriber(ByteString subscriberId) { return subscriberId.toStringUtf8().startsWith(HUB_SUBSCRIBER_PREFIX); } public static Map<String, ByteString> buildUserOptions(SubscriptionPreferences preferences) { if (preferences.hasOptions()) { return MapUtils.buildMap(preferences.getOptions()); } else { return new HashMap<String, ByteString>(); } } } bookkeeper-release-4.2.4/hedwig-protocol/src/main/protobuf/000077500000000000000000000000001244507361200237525ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-protocol/src/main/protobuf/PubSubProtocol.proto000066400000000000000000000216201244507361200277620ustar00rootroot00000000000000/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License.
*/ option java_package = "org.apache.hedwig.protocol"; option optimize_for = SPEED; package Hedwig; enum ProtocolVersion{ VERSION_ONE = 1; } // common structure to store header or properties message Map { message Entry { optional string key = 1; optional bytes value = 2; } repeated Entry entries = 1; } // message header message MessageHeader { // user customized fields used for message filter optional Map properties = 1; // following are system properties in message header optional string messageType = 2; } /* * this is the structure that will be serialized */ message Message { required bytes body = 1; optional bytes srcRegion = 2; optional MessageSeqId msgId = 3; // message header optional MessageHeader header = 4; } message RegionSpecificSeqId { required bytes region = 1; required uint64 seqId = 2; } message MessageSeqId{ optional uint64 localComponent = 1; repeated RegionSpecificSeqId remoteComponents = 2; } enum OperationType{ PUBLISH = 0; SUBSCRIBE = 1; CONSUME = 2; UNSUBSCRIBE = 3; //the following two are only used for the hedwig proxy START_DELIVERY = 4; STOP_DELIVERY = 5; // end for requests only used for hedwig proxy CLOSESUBSCRIPTION = 6; } /* A PubSubRequest is just a union of the various request types, with * an enum telling us which type it is. The same can also be done through * extensions. We need one request type that we will deserialize into on * the server side. */ message PubSubRequest{ required ProtocolVersion protocolVersion = 1; required OperationType type = 2; repeated bytes triedServers = 3; required uint64 txnId = 4; optional bool shouldClaim = 5; required bytes topic = 6; //any authentication stuff and other general stuff here /* one entry for each type of request */ optional PublishRequest publishRequest = 52; optional SubscribeRequest subscribeRequest = 53; optional ConsumeRequest consumeRequest = 54; optional UnsubscribeRequest unsubscribeRequest = 55; optional StopDeliveryRequest stopDeliveryRequest = 56; optional StartDeliveryRequest startDeliveryRequest = 57; optional CloseSubscriptionRequest closeSubscriptionRequest = 58; } message PublishRequest{ required Message msg = 2; } // record all preferences for a subscription, // would be serialized to be stored in meta store message SubscriptionPreferences { // user customized subscription options optional Map options = 1; /// /// system defined options /// // message bound optional uint32 messageBound = 2; // server-side message filter optional string messageFilter = 3; // message window size, this is the maximum number of messages // which will be delivered without being consumed optional uint32 messageWindowSize = 4; } message SubscribeRequest{ required bytes subscriberId = 2; enum CreateOrAttach{ CREATE = 0; ATTACH = 1; CREATE_OR_ATTACH = 2; }; optional CreateOrAttach createOrAttach = 3 [default = CREATE_OR_ATTACH]; // wait for cross-regional subscriptions to be established before returning optional bool synchronous = 4 [default = false]; // @Deprecated. 
Set the message bound in SubscriptionPreferences instead. optional uint32 messageBound = 5; // subscription options optional SubscriptionPreferences preferences = 6; // force attach subscription, which would kill an existing channel // this option doesn't need to be persisted optional bool forceAttach = 7 [default = false]; } // used in client only // options are stored in SubscriptionPreferences structure message SubscriptionOptions { // force attach subscription, which would kill an existing channel // this option doesn't need to be persisted optional bool forceAttach = 1 [default = false]; optional SubscribeRequest.CreateOrAttach createOrAttach = 2 [default = CREATE_OR_ATTACH]; optional uint32 messageBound = 3 [default = 0]; // user customized subscription options optional Map options = 4; // server-side message filter optional string messageFilter = 5; // message window size, this is the maximum number of messages // which will be delivered without being consumed optional uint32 messageWindowSize = 6; // enable resubscribe optional bool enableResubscribe = 7 [default = true]; } message ConsumeRequest{ required bytes subscriberId = 2; required MessageSeqId msgId = 3; //the msgId is cumulative: all messages up to this id are marked as consumed } message UnsubscribeRequest{ required bytes subscriberId = 2; } message CloseSubscriptionRequest { required bytes subscriberId = 2; } message StopDeliveryRequest{ required bytes subscriberId = 2; } message StartDeliveryRequest{ required bytes subscriberId = 2; } // Identifies an event that happened for a subscription enum SubscriptionEvent { // topic has changed ownership (hub server down or topic released) TOPIC_MOVED = 1; // subscription is force closed by other subscribers SUBSCRIPTION_FORCED_CLOSED = 2; } // a response carries an event for a subscription sent to client message SubscriptionEventResponse { optional SubscriptionEvent event = 1; } message PubSubResponse{ required ProtocolVersion protocolVersion = 1; required StatusCode statusCode = 2; required uint64 txnId = 3; optional string statusMsg = 4; //in case of a status code of NOT_RESPONSIBLE_FOR_TOPIC, the status //message will contain the name of the host actually responsible //for the topic //the following fields are sent in delivered messages optional Message message = 5; optional bytes topic = 6; optional bytes subscriberId = 7; // the following fields are sent by other requests optional ResponseBody responseBody = 8; } message PublishResponse { // If the request was a publish request, this is the message id of the published message.
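// For example (illustrative values only), a response in a single-region deployment // might carry just { localComponent: 42 }; remoteComponents are only populated for // messages that have been replicated across regions.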
required MessageSeqId publishedMsgId = 1; } message SubscribeResponse { optional SubscriptionPreferences preferences = 2; } message ResponseBody { optional PublishResponse publishResponse = 1; optional SubscribeResponse subscribeResponse = 2; optional SubscriptionEventResponse subscriptionEvent = 3; } enum StatusCode{ SUCCESS = 0; //client-side errors (4xx) MALFORMED_REQUEST = 401; NO_SUCH_TOPIC = 402; CLIENT_ALREADY_SUBSCRIBED = 403; CLIENT_NOT_SUBSCRIBED = 404; COULD_NOT_CONNECT = 405; TOPIC_BUSY = 406; RESUBSCRIBE_EXCEPTION = 407; //server-side errors (5xx) NOT_RESPONSIBLE_FOR_TOPIC = 501; SERVICE_DOWN = 502; UNCERTAIN_STATE = 503; INVALID_MESSAGE_FILTER = 504; //server-side meta manager errors (52x) BAD_VERSION = 520; NO_TOPIC_PERSISTENCE_INFO = 521; TOPIC_PERSISTENCE_INFO_EXISTS = 522; NO_SUBSCRIPTION_STATE = 523; SUBSCRIPTION_STATE_EXISTS = 524; NO_TOPIC_OWNER_INFO = 525; TOPIC_OWNER_INFO_EXISTS = 526; //For all unexpected error conditions UNEXPECTED_CONDITION = 600; COMPOSITE = 700; } //What follows is not the server client protocol, but server-internal structures that are serialized in ZK //They should eventually be moved into the server message SubscriptionState { required MessageSeqId msgId = 1; // @Deprecated. // It is a bad idea to put fields that don't change frequently // together with fields that change frequently // so move it to subscription preferences structure optional uint32 messageBound = 2; } message SubscriptionData { optional SubscriptionState state = 1; optional SubscriptionPreferences preferences = 2; } message LedgerRange{ required uint64 ledgerId = 1; optional MessageSeqId endSeqIdIncluded = 2; optional uint64 startSeqIdIncluded = 3; } message LedgerRanges{ repeated LedgerRange ranges = 1; } message ManagerMeta { required string managerImpl = 2; required uint32 managerVersion = 3; } message HubInfoData { required string hostname = 2; required uint64 czxid = 3; } message HubLoadData { required uint64 numTopics = 2; } bookkeeper-release-4.2.4/hedwig-protocol/src/main/resources/000077500000000000000000000000001244507361200241245ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-protocol/src/main/resources/findbugsExclude.xml000066400000000000000000000017711244507361200277670ustar00rootroot00000000000000 bookkeeper-release-4.2.4/hedwig-server/000077500000000000000000000000001244507361200200445ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/bin/000077500000000000000000000000001244507361200206145ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/bin/hedwig000077500000000000000000000153511244507361200220160ustar00rootroot00000000000000#!/usr/bin/env bash # #/** # * Copyright 2007 The Apache Software Foundation # * # * Licensed to the Apache Software Foundation (ASF) under one # * or more contributor license agreements. See the NOTICE file # * distributed with this work for additional information # * regarding copyright ownership. The ASF licenses this file # * to you under the Apache License, Version 2.0 (the # * "License"); you may not use this file except in compliance # * with the License. You may obtain a copy of the License at # * # * http://www.apache.org/licenses/LICENSE-2.0 # * # * Unless required by applicable law or agreed to in writing, software # * distributed under the License is distributed on an "AS IS" BASIS, # * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # * See the License for the specific language governing permissions and # * limitations under the License. 
# */ # check if net.ipv6.bindv6only is set to 1 bindv6only=$(/sbin/sysctl -n net.ipv6.bindv6only 2> /dev/null) if [ -n "$bindv6only" ] && [ "$bindv6only" -eq "1" ] then echo "Error: \"net.ipv6.bindv6only\" is set to 1 - Java networking could be broken" echo "For more info (the following page also applies to hedwig): http://wiki.apache.org/hadoop/HadoopIPv6" exit 1 fi # See the following page for extensive details on setting # up the JVM to accept JMX remote management: # http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html # by default we allow local JMX connections if [ "x$JMXLOCALONLY" = "x" ] then JMXLOCALONLY=false fi if [ "x$JMXDISABLE" = "x" ] then echo "JMX enabled by default" >&2 # for some reason these two options are necessary on jdk6 on Ubuntu # according to the docs they are not necessary, but otherwise jconsole cannot # do a local attach JMX_ARGS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=$JMXLOCALONLY" else echo "JMX disabled by user request" >&2 fi BINDIR=`dirname "$0"` HW_HOME=`cd $BINDIR/..;pwd` DEFAULT_CONF=$HW_HOME/conf/hw_server.conf DEFAULT_REGION_CLIENT_CONF=$HW_HOME/conf/hw_region_client.conf DEFAULT_LOG_CONF=$HW_HOME/conf/log4j.properties . $HW_HOME/conf/hwenv.sh # Check for the java to use if [[ -z $JAVA_HOME ]]; then JAVA=$(which java) if [ $? = 0 ]; then echo "JAVA_HOME not set, using java from PATH. ($JAVA)" else echo "Error: JAVA_HOME not set, and no java executable found in $PATH." 1>&2 exit 1 fi else JAVA=$JAVA_HOME/bin/java fi RELEASE_JAR=`ls $HW_HOME/hedwig-server-*.jar 2> /dev/null | tail -1` if [ -n "$RELEASE_JAR" ]; then HEDWIG_JAR=$RELEASE_JAR fi BUILT_JAR=`ls $HW_HOME/target/hedwig-server-*.jar 2> /dev/null | tail -1` if [ -z "$BUILT_JAR" ] && [ ! -e "$HEDWIG_JAR" ]; then echo "Couldn't find hedwig jar."; echo "Make sure you've run 'mvn package'"; exit 1; elif [ -e "$BUILT_JAR" ]; then HEDWIG_JAR=$BUILT_JAR fi add_maven_deps_to_classpath() { MVN="mvn" if [ "$MAVEN_HOME" != "" ]; then MVN=${MAVEN_HOME}/bin/mvn fi # Need to generate classpath from maven pom. This is costly so generate it # and cache it. Save the file into our target dir so a mvn clean will # clean it up and force us to create a new one. f="${HW_HOME}/target/cached_classpath.txt" if [ ! -f "${f}" ] then ${MVN} -f "${HW_HOME}/pom.xml" dependency:build-classpath -Dmdep.outputFile="${f}" &> /dev/null fi HEDWIG_CLASSPATH=${CLASSPATH}:`cat "${f}"` } if [ -d "$HW_HOME/lib" ]; then for i in $HW_HOME/lib/*.jar; do HEDWIG_CLASSPATH=$HEDWIG_CLASSPATH:$i done else add_maven_deps_to_classpath fi hedwig_help() { cat <<EOF Usage: hedwig <command> where command is one of: server Run the hedwig server console Run the hedwig admin console help This help message or command is the full name of a class with a defined main() method. Environment variables: HEDWIG_SERVER_CONF Hedwig server configuration file (default $DEFAULT_CONF) HEDWIG_REGION_CLIENT_CONF Configuration file for the hedwig client used by the region manager (default $DEFAULT_REGION_CLIENT_CONF) HEDWIG_CONSOLE_SERVER_CONF Server part configuration for hedwig console, used for metadata management (defaults to HEDWIG_SERVER_CONF) HEDWIG_CONSOLE_CLIENT_CONF Client part configuration for hedwig console, used for interacting with hub server.
HEDWIG_LOG_CONF Log4j configuration file (default $DEFAULT_LOG_CONF) HEDWIG_ROOT_LOGGER Root logger for hedwig HEDWIG_LOG_DIR Log directory to store log files for hedwig server HEDWIG_LOG_FILE Log file name HEDWIG_EXTRA_OPTS Extra options to be passed to the jvm These variables can also be set in conf/hwenv.sh EOF } # if no args specified, show usage if [ $# = 0 ]; then hedwig_help; exit 1; fi # get arguments COMMAND=$1 shift if [ -z "$HEDWIG_SERVER_CONF" ]; then HEDWIG_SERVER_CONF=$DEFAULT_CONF; fi if [ -z "$HEDWIG_REGION_CLIENT_CONF" ]; then HEDWIG_REGION_CLIENT_CONF=$DEFAULT_REGION_CLIENT_CONF; fi if [ -z "$HEDWIG_LOG_CONF" ]; then HEDWIG_LOG_CONF=$DEFAULT_LOG_CONF fi HEDWIG_CLASSPATH="$HEDWIG_JAR:$HEDWIG_CLASSPATH" if [ "$HEDWIG_LOG_CONF" != "" ]; then HEDWIG_CLASSPATH="`dirname $HEDWIG_LOG_CONF`:$HEDWIG_CLASSPATH" OPTS="$OPTS -Dlog4j.configuration=`basename $HEDWIG_LOG_CONF`" fi OPTS="-cp $HEDWIG_CLASSPATH $OPTS $HEDWIG_EXTRA_OPTS" # Disable ipv6 as it can cause issues OPTS="$OPTS -Djava.net.preferIPv4Stack=true" # log directory & file HEDWIG_ROOT_LOGGER=${HEDWIG_ROOT_LOGGER:-"INFO,CONSOLE"} HEDWIG_LOG_DIR=${HEDWIG_LOG_DIR:-"$HW_HOME/logs"} HEDWIG_LOG_FILE=${HEDWIG_LOG_FILE:-"hedwig-server.log"} # Configure log configuration system properties OPTS="$OPTS -Dhedwig.root.logger=$HEDWIG_ROOT_LOGGER" OPTS="$OPTS -Dhedwig.log.dir=$HEDWIG_LOG_DIR" OPTS="$OPTS -Dhedwig.log.file=$HEDWIG_LOG_FILE" # Change to HW_HOME to support relative paths cd "$HW_HOME" if [ $COMMAND == "server" ]; then exec $JAVA $OPTS $JMX_ARGS org.apache.hedwig.server.netty.PubSubServer $HEDWIG_SERVER_CONF $HEDWIG_REGION_CLIENT_CONF $@ elif [ $COMMAND == "console" ]; then # hedwig console configuration server part if [ -z "$HEDWIG_CONSOLE_SERVER_CONF" ]; then HEDWIG_CONSOLE_SERVER_CONF=$HEDWIG_SERVER_CONF fi # hedwig console configuration client part if [ -n "$HEDWIG_CONSOLE_CLIENT_CONF" ]; then HEDWIG_CONSOLE_CLIENT_OPTIONS="-client-cfg $HEDWIG_CONSOLE_CLIENT_CONF" fi exec $JAVA $OPTS org.apache.hedwig.admin.console.HedwigConsole -server-cfg $HEDWIG_CONSOLE_SERVER_CONF $HEDWIG_CONSOLE_CLIENT_OPTIONS $@ elif [ $COMMAND == "help" ]; then hedwig_help; else exec $JAVA $OPTS $COMMAND $@ fi bookkeeper-release-4.2.4/hedwig-server/bin/hedwig-daemon.sh000077500000000000000000000076601244507361200236720ustar00rootroot00000000000000#!/usr/bin/env bash # #/** # * Licensed to the Apache Software Foundation (ASF) under one # * or more contributor license agreements. See the NOTICE file # * distributed with this work for additional information # * regarding copyright ownership. The ASF licenses this file # * to you under the Apache License, Version 2.0 (the # * "License"); you may not use this file except in compliance # * with the License. You may obtain a copy of the License at # * # * http://www.apache.org/licenses/LICENSE-2.0 # * # * Unless required by applicable law or agreed to in writing, software # * distributed under the License is distributed on an "AS IS" BASIS, # * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # * See the License for the specific language governing permissions and # * limitations under the License. # */ usage() { cat <<EOF Usage: $0 (start|stop) <command> where command is one of: server Run the hedwig server EOF } BINDIR=`dirname "$0"` HEDWIG_HOME=`cd $BINDIR/..;pwd` if [ -f $HEDWIG_HOME/conf/hwenv.sh ] then .
$HEDWIG_HOME/conf/hwenv.sh fi HEDWIG_LOG_DIR=${HEDWIG_LOG_DIR:-"$HEDWIG_HOME/logs"} HEDWIG_ROOT_LOGGER=${HEDWIG_ROOT_LOGGER:-'INFO,ROLLINGFILE'} HEDWIG_STOP_TIMEOUT=${HEDWIG_STOP_TIMEOUT:-30} HEDWIG_PID_DIR=${HEDWIG_PID_DIR:-$HEDWIG_HOME/bin} if [ $# -lt 2 ] then echo "Error: not enough arguments provided." usage exit 1 fi startStop=$1 shift command=$1 shift case $command in (server) echo "doing $startStop $command ..." ;; (*) echo "Error: unknown service name $command" usage exit 1 ;; esac export HEDWIG_LOG_DIR=$HEDWIG_LOG_DIR export HEDWIG_ROOT_LOGGER=$HEDWIG_ROOT_LOGGER export HEDWIG_LOG_FILE=hedwig-$command-$HOSTNAME.log pid=$HEDWIG_PID_DIR/hedwig-$command.pid out=$HEDWIG_LOG_DIR/hedwig-$command-$HOSTNAME.out logfile=$HEDWIG_LOG_DIR/$HEDWIG_LOG_FILE rotate_out_log () { log=$1; num=5; if [ -n "$2" ]; then num=$2 fi if [ -f "$log" ]; then # rotate logs while [ $num -gt 1 ]; do prev=`expr $num - 1` [ -f "$log.$prev" ] && mv "$log.$prev" "$log.$num" num=$prev done mv "$log" "$log.$num"; fi } mkdir -p "$HEDWIG_LOG_DIR" case $startStop in (start) if [ -f $pid ]; then if kill -0 `cat $pid` > /dev/null 2>&1; then echo $command running as process `cat $pid`. Stop it first. exit 1 fi fi rotate_out_log $out echo starting $command, logging to $logfile hedwig=$HEDWIG_HOME/bin/hedwig nohup $hedwig $command "$@" > "$out" 2>&1 < /dev/null & echo $! > $pid sleep 1; head $out sleep 2; if ! ps -p $! > /dev/null ; then exit 1 fi ;; (stop) if [ -f $pid ]; then TARGET_PID=`cat $pid` if kill -0 $TARGET_PID > /dev/null 2>&1; then echo stopping $command kill $TARGET_PID count=0 location=$HEDWIG_LOG_DIR while ps -p $TARGET_PID > /dev/null; do echo "Shutdown is in progress... Please wait..." sleep 1 count=`expr $count + 1` if [ "$count" = "$HEDWIG_STOP_TIMEOUT" ]; then break fi done if [ "$count" != "$HEDWIG_STOP_TIMEOUT" ]; then echo "Shutdown completed." exit 0 fi if kill -0 $TARGET_PID > /dev/null 2>&1; then fileName=$location/$command.out $JAVA_HOME/bin/jstack $TARGET_PID > $fileName echo Thread dumps are taken for analysis at $fileName echo forcefully stopping $command kill -9 $TARGET_PID >/dev/null 2>&1 echo Successfully stopped the process fi else echo no $command to stop fi rm $pid else echo no $command to stop fi ;; (*) usage exit 1 ;; esac bookkeeper-release-4.2.4/hedwig-server/conf/000077500000000000000000000000001244507361200207715ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/conf/hw_region_client.conf000066400000000000000000000032541244507361200251630ustar00rootroot00000000000000# Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements. See the NOTICE file distributed with # this work for additional information regarding copyright ownership. # The ASF licenses this file to You under the Apache License, Version 2.0 # (the "License"); you may not use this file except in compliance with # the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # This is the configuration file for the hedwig client used by the region manager # This parameter is a boolean flag indicating if communication with the # server should be done via SSL for encryption.
The Hedwig server hubs also # need to be SSL enabled for this to work. # ssl_enabled=false # The maximum message size in bytes # max_message_size=2097152 # The maximum number of redirects we permit before signalling an error # max_server_redirects=2 # A flag indicating whether the client library should automatically send # consume messages to the server # auto_send_consume_message_enabled=true # The number of messages we buffer before sending a consume message # to the server # consumed_messages_buffer_size=5 # Support for client side throttling. # max_outstanding_messages=10 # The timeout in milliseconds before we error out any existing # requests # server_ack_response_timeout=30000 bookkeeper-release-4.2.4/hedwig-server/conf/hw_server.conf000066400000000000000000000132031244507361200236430ustar00rootroot00000000000000# Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements. See the NOTICE file distributed with # this work for additional information regarding copyright ownership. # The ASF licenses this file to You under the Apache License, Version 2.0 # (the "License"); you may not use this file except in compliance with # the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ################################ # ZooKeeper Settings ################################ # The ZooKeeper server host(s) for the Hedwig Server to use. zk_host=localhost:2181 # The ZooKeeper session timeout in milliseconds. zk_timeout=2000 ################################ # Hub Server Settings ################################ # Is the hub server running in standalone mode? # Default is false. standalone=false # The port at which the clients will connect. server_port=4080 # The SSL port at which the clients will connect (only if SSL is enabled). ssl_server_port=9876 # Flag indicating if the server should also operate in SSL mode. ssl_enabled=false # Name of the SSL certificate if available as a resource. # The certificate should be in pkcs12 format. # cert_name= # Path to the SSL certificate if available as a file. # The certificate should be in pkcs12 format. # cert_path= # Password used for pkcs12 certificate. # password= ####################################### # Publish and subscription parameters ####################################### # Max message size that a hub server will accept # max_message_size=1258291 # Message sequence interval at which to update subscription state in the metadata store. # Default is 50. # consume_interval=50 # Time interval (in seconds) to release topic ownership. If the time interval # is less than zero, the ownership will never be released automatically. # Default is 0. # retention_secs=0 # Time interval (in milliseconds) to run messages consumed timer task to # delete those consumed ledgers in BookKeeper. # messages_consumed_thread_run_interval=60000 # Default maximum number of messages which can be delivered to a subscriber # without being consumed. We pause message delivery to a subscriber when # reaching the window size. Default is 0, which means we never pause message # delivery, even if a subscriber consumes nothing and doesn't set its own # message window size.
# default_message_window_size=0 # The maximum number of entries stored in a ledger. When the number of entries # reaches this threshold, the hub server will open a new ledger to write to. Default is 0. # If set to 0, the hub server keeps writing entries to the same ledger until # the topic ownership changes. # max_entries_per_ledger=0 ################################ # Region Related Settings ################################ # Region name that the hub server belongs to. # region=standalone # Regions list of a Hedwig instance. # The expected format for the regions parameter is Hostname:Port:SSLPort # with spaces in between each region. # regions= # Enable SSL connections between regions or not. # (@Deprecated here. It is recommended to set this in conf/hw_region_client.conf) # Default is false. # inter_region_ssl_enabled=false # Time interval (in milliseconds) to run thread to retry those failed # remote subscriptions in asynchronous mode. Default is 120000. # retry_remote_subscribe_thread_run_interval=120000 ################################ # ReadAhead Settings ################################ # Enable read ahead cache or not. If disabled, read requests # would access BookKeeper directly. # Default is true. # readahead_enabled=true # Number of entries to read ahead. Default value is 10. # readahead_count=10 # Max size of entries to read ahead. Default value is 4M. # readahead_size=4194304 # Max memory used for ReadAhead Cache. # Default value is the minimum of 2G and half of the JVM max memory. # cache_size= # The backoff time (in milliseconds) to retry scans after failures. # Default value is 1000. # scan_backoff_ms=1000 # Sets the number of threads to be used for the read-ahead mechanism. # Default is the number of cores as returned with a call to # Runtime.getRuntime().availableProcessors(). # num_readahead_cache_threads= # Set TTL for cache entries. Each time a new entry is added to the cache, # expired cache entries are discarded. If the value is set # to zero or less than zero, cache entries will not be evicted until the # cache is full or the messages have already been consumed. By default # the value is zero. # cache_entry_ttl= ################################ # Metadata Settings ################################ # zookeeper prefix to store metadata if using zookeeper as metadata store. # Default value is "/hedwig". # zk_prefix=/hedwig # Enable metadata manager based topic manager. Default is false. # metadata_manager_based_topic_manager_enabled=false # Class name of metadata manager factory used to store metadata. # Default is null. # metadata_manager_factory_class= ################################ # BookKeeper Settings ################################ # Ensemble size of a ledger in BookKeeper. Default is 3. # bk_ensemble_size=3 # Write quorum size for a ledger in BookKeeper. Default is 2. # bk_write_quorum_size=2 # Ack quorum size for a ledger in BookKeeper. Default is 2. # bk_ack_quorum_size=2 bookkeeper-release-4.2.4/hedwig-server/conf/hwenv.sh000066400000000000000000000033341244507361200224570ustar00rootroot00000000000000#!/bin/sh # #/** # * Copyright 2007 The Apache Software Foundation # * # * Licensed to the Apache Software Foundation (ASF) under one # * or more contributor license agreements. See the NOTICE file # * distributed with this work for additional information # * regarding copyright ownership. The ASF licenses this file # * to you under the Apache License, Version 2.0 (the # * "License"); you may not use this file except in compliance # * with the License.
You may obtain a copy of the License at # * # * http://www.apache.org/licenses/LICENSE-2.0 # * # * Unless required by applicable law or agreed to in writing, software # * distributed under the License is distributed on an "AS IS" BASIS, # * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # * See the License for the specific language governing permissions and # * limitations under the License. # */ # Set JAVA_HOME here to override the environment setting # JAVA_HOME= # default settings for starting hedwig # HEDWIG_SERVER_CONF= # default settings for the region manager's hedwig client # HEDWIG_REGION_CLIENT_CONF= # default settings for the hedwig client # HEDWIG_CLIENT_CONF= # Server part configuration for hedwig console, # used for metadata management # HEDWIG_CONSOLE_SERVER_CONF= # Client part configuration for hedwig console, # used for interacting with hub server. # HEDWIG_CONSOLE_CLIENT_CONF= # Log4j configuration file # HEDWIG_LOG_CONF= # Logs location # HEDWIG_LOG_DIR= # Extra options to be passed to the jvm # HEDWIG_EXTRA_OPTS= # Folder where the hedwig server PID file should be stored # HEDWIG_PID_DIR= # Wait time before forcefully killing the hedwig server instance, if the stop is not successful # HEDWIG_STOP_TIMEOUT= bookkeeper-release-4.2.4/hedwig-server/conf/log4j.properties000066400000000000000000000055211244507361200241310ustar00rootroot00000000000000# # # Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. # # # # Hedwig Logging Configuration # # Format is "<default threshold> (, <appender>)+ # DEFAULT: console appender only # Define some default values that can be overridden by system properties hedwig.root.logger=WARN,CONSOLE hedwig.log.dir=.
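# (These defaults are normally overridden through JVM system properties; for example, # the bin/hedwig script passes -Dhedwig.root.logger=$HEDWIG_ROOT_LOGGER, # -Dhedwig.log.dir=$HEDWIG_LOG_DIR and -Dhedwig.log.file=$HEDWIG_LOG_FILE.)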
hedwig.log.file=hedwig-server.log hedwig.trace.file=hedwig-trace.log log4j.rootLogger=${hedwig.root.logger} # Example with rolling log file #log4j.rootLogger=DEBUG, CONSOLE, ROLLINGFILE # Example with rolling log file and tracing #log4j.rootLogger=TRACE, CONSOLE, ROLLINGFILE, TRACEFILE # # Log INFO level and above messages to the console # log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender log4j.appender.CONSOLE.Threshold=INFO log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} - %-5p - [%t:%C{1}@%L] - %m%n # # Add ROLLINGFILE to rootLogger to get log file output # Log DEBUG level and above messages to a log file log4j.appender.ROLLINGFILE=org.apache.log4j.DailyRollingFileAppender log4j.appender.ROLLINGFILE.Threshold=INFO log4j.appender.ROLLINGFILE.File=${hedwig.log.dir}/${hedwig.log.file} log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p - [%t:%C{1}@%L] - %m%n # Max log file size of 10MB #log4j.appender.ROLLINGFILE.MaxFileSize=10MB # uncomment the next line to limit number of backup files #log4j.appender.ROLLINGFILE.MaxBackupIndex=10 log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n # # Add TRACEFILE to rootLogger to get log file output # Log DEBUG level and above messages to a log file log4j.appender.TRACEFILE=org.apache.log4j.FileAppender log4j.appender.TRACEFILE.Threshold=TRACE log4j.appender.TRACEFILE.File=${hedwig.log.dir}/${hedwig.trace.file} log4j.appender.TRACEFILE.layout=org.apache.log4j.PatternLayout ### Notice we are including log4j's NDC here (%x) log4j.appender.TRACEFILE.layout.ConversionPattern=%d{ISO8601} - %-5p [%t:%C{1}@%L][%x] - %m%n bookkeeper-release-4.2.4/hedwig-server/pom.xml000066400000000000000000000204011244507361200213560ustar00rootroot00000000000000 4.0.0 org.apache.bookkeeper bookkeeper 4.2.4 org.apache.hedwig.server.netty.PubSubServer ${basedir}/lib hedwig-server jar hedwig-server http://maven.apache.org junit junit 4.8.1 test org.slf4j slf4j-api 1.6.4 org.slf4j slf4j-log4j12 1.6.4 org.apache.bookkeeper hedwig-client ${project.parent.version} compile jar org.apache.derby derby 10.8.2.2 runtime org.apache.zookeeper zookeeper 3.4.3 compile org.apache.zookeeper zookeeper 3.4.3 test-jar test org.apache.bookkeeper bookkeeper-server ${project.parent.version} compile jar org.apache.bookkeeper bookkeeper-server ${project.parent.version} test test-jar log4j log4j 1.2.15 javax.mail mail javax.jms jms com.sun.jdmk jmxtools com.sun.jmx jmxri jline jline 0.9.94 org.apache.bookkeeper hedwig-server-compat410 4.1.0 test org.apache.bookkeeper bookkeeper-server org.apache.bookkeeper hedwig-server org.apache.bookkeeper hedwig-protocol org.apache.bookkeeper hedwig-client org.apache.bookkeeper hedwig-server-compat400 4.0.0 test org.apache.bookkeeper bookkeeper-server org.apache.bookkeeper hedwig-server org.apache.bookkeeper hedwig-protocol org.apache.bookkeeper hedwig-client org.apache.rat apache-rat-plugin 0.7 **/p12.pass maven-assembly-plugin 2.2.1 ../src/assemble/bin.xml org.codehaus.mojo findbugs-maven-plugin ${basedir}/src/main/resources/findbugsExclude.xml maven-antrun-plugin createbuilddir generate-test-resources run maven-dependency-plugin package copy-dependencies ${project.libdir} org.apache.maven.plugins maven-surefire-plugin target/derby.log target/zk_clientbase_build maven-clean-plugin 2.5 
${project.libdir} false bookkeeper-release-4.2.4/hedwig-server/src/000077500000000000000000000000001244507361200206335ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/000077500000000000000000000000001244507361200215575ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/000077500000000000000000000000001244507361200225005ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/000077500000000000000000000000001244507361200232675ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/000077500000000000000000000000001244507361200245105ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/000077500000000000000000000000001244507361200257575ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/admin/000077500000000000000000000000001244507361200270475ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/admin/HedwigAdmin.java000066400000000000000000000446071244507361200321050ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package org.apache.hedwig.admin; import java.util.Arrays; import java.util.ArrayList; import java.util.HashMap; import java.util.Iterator; import java.util.List; import java.util.Map; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; import org.apache.bookkeeper.conf.ClientConfiguration; import org.apache.bookkeeper.client.BKException; import org.apache.bookkeeper.client.BookKeeper; import org.apache.bookkeeper.client.BookKeeper.DigestType; import org.apache.bookkeeper.client.LedgerHandle; import org.apache.bookkeeper.versioning.Versioned; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.protocol.PubSubProtocol.LedgerRange; import org.apache.hedwig.protocol.PubSubProtocol.LedgerRanges; import org.apache.hedwig.protocol.PubSubProtocol.Message; import org.apache.hedwig.protocol.PubSubProtocol.MessageSeqId; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionData; import org.apache.hedwig.server.common.ServerConfiguration; import org.apache.hedwig.server.meta.MetadataManagerFactory; import org.apache.hedwig.server.meta.FactoryLayout; import org.apache.hedwig.server.meta.SubscriptionDataManager; import org.apache.hedwig.server.meta.TopicOwnershipManager; import org.apache.hedwig.server.meta.TopicPersistenceManager; import org.apache.hedwig.server.subscriptions.InMemorySubscriptionState; import org.apache.hedwig.server.topics.HubInfo; import org.apache.hedwig.server.topics.HubLoad; import org.apache.hedwig.util.Callback; import org.apache.hedwig.util.HedwigSocketAddress; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.WatchedEvent; import org.apache.zookeeper.Watcher; import org.apache.zookeeper.ZooKeeper; import org.apache.zookeeper.data.Stat; import com.google.protobuf.ByteString; import com.google.protobuf.InvalidProtocolBufferException; /** * Hedwig Admin */ public class HedwigAdmin { static final Logger LOG = LoggerFactory.getLogger(HedwigAdmin.class); // NOTE: a fixed passwd is currently used in hedwig static byte[] passwd = "sillysecret".getBytes(); protected final ZooKeeper zk; protected final BookKeeper bk; protected final MetadataManagerFactory mmFactory; protected final SubscriptionDataManager sdm; protected final TopicOwnershipManager tom; protected final TopicPersistenceManager tpm; // hub configurations protected final ServerConfiguration serverConf; // bookkeeper configurations protected final ClientConfiguration bkClientConf; protected final CountDownLatch zkReadyLatch = new CountDownLatch(1); // Watcher that counts down the ready latch once the ZooKeeper connection is established private class MyWatcher implements Watcher { public void process(WatchedEvent event) { if (Event.KeeperState.SyncConnected.equals(event.getState())) { zkReadyLatch.countDown(); } } } static class SyncObj<T> { boolean finished = false; boolean success = false; T value = null; PubSubException exception = null; synchronized void success(T v) { finished = true; success = true; value = v; notify(); } synchronized void fail(PubSubException pse) { finished = true; success = false; exception = pse; notify(); } synchronized void block() { try { while (!finished) { wait(); } } catch (InterruptedException ie) { } } synchronized boolean isSuccess() { return success; } } /** * Stats of a hub */ public static class HubStats { HubInfo hubInfo; HubLoad hubLoad; public HubStats(HubInfo info, HubLoad load) { this.hubInfo = info; this.hubLoad = load; } @Override public String toString() {
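// Renders the hub on a single line as: info : [...], load : [...] // (newlines in the underlying HubInfo/HubLoad dumps are collapsed into ", ")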
StringBuilder sb = new StringBuilder(); sb.append("info : [").append(hubInfo.toString().trim().replaceAll("\n", ", ")) .append("], load : [").append(hubLoad.toString().trim().replaceAll("\n", ", ")) .append("]"); return sb.toString(); } } /** * Hedwig Admin Constructor * * @param bkConf * BookKeeper Client Configuration. * @param hubConf * Hub Server Configuration. * @throws Exception */ public HedwigAdmin(ClientConfiguration bkConf, ServerConfiguration hubConf) throws Exception { this.serverConf = hubConf; this.bkClientConf = bkConf; // connect to zookeeper zk = new ZooKeeper(hubConf.getZkHost(), hubConf.getZkTimeout(), new MyWatcher()); LOG.debug("Connecting to zookeeper {}, timeout = {}", hubConf.getZkHost(), hubConf.getZkTimeout()); // wait until connection is ready if (!zkReadyLatch.await(hubConf.getZkTimeout() * 2, TimeUnit.MILLISECONDS)) { throw new Exception("Could not establish connection with ZooKeeper after " + hubConf.getZkTimeout() * 2 + " ms."); } // construct the metadata manager factory mmFactory = MetadataManagerFactory.newMetadataManagerFactory(hubConf, zk); tpm = mmFactory.newTopicPersistenceManager(); tom = mmFactory.newTopicOwnershipManager(); sdm = mmFactory.newSubscriptionDataManager(); // connect to bookkeeper bk = new BookKeeper(bkClientConf, zk); LOG.debug("Connecting to bookkeeper"); } /** * Close the hedwig admin. * * @throws Exception */ public void close() throws Exception { tpm.close(); tom.close(); sdm.close(); mmFactory.shutdown(); bk.close(); zk.close(); } /** * Return zookeeper handle used in hedwig admin. * * @return zookeeper handle */ public ZooKeeper getZkHandle() { return zk; } /** * Return bookkeeper handle used in hedwig admin. * * @return bookkeeper handle */ public BookKeeper getBkHandle() { return bk; } /** * Return hub server configuration used in hedwig admin * * @return hub server configuration */ public ServerConfiguration getHubServerConf() { return serverConf; } /** * Return metadata manager factory. * * @return metadata manager factory instance. */ public MetadataManagerFactory getMetadataManagerFactory() { return mmFactory; } /** * Return bookkeeper passwd used in hedwig admin * * @return bookkeeper passwd */ public byte[] getBkPasswd() { return Arrays.copyOf(passwd, passwd.length); } /** * Return digest type used in hedwig admin * * @return bookkeeper digest type */ public DigestType getBkDigestType() { return DigestType.CRC32; } /** * Does the topic exist? * * @param topic * Topic name * @return whether the topic exists or not * @throws Exception */ public boolean hasTopic(ByteString topic) throws Exception { // current persistence info is bound to a topic, so if there is persistence info // there is a topic. final SyncObj<Boolean> syncObj = new SyncObj<Boolean>(); tpm.readTopicPersistenceInfo(topic, new Callback<Versioned<LedgerRanges>>() { @Override public void operationFinished(Object ctx, Versioned<LedgerRanges> result) { if (null == result) { syncObj.success(false); } else { syncObj.success(true); } } @Override public void operationFailed(Object ctx, PubSubException pse) { syncObj.fail(pse); } }, syncObj); syncObj.block(); if (!syncObj.isSuccess()) { throw syncObj.exception; } return syncObj.value; } /** * Get available hubs.
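* <p>A minimal usage sketch (the default configurations here are illustrative, not prescribed): * <pre> * HedwigAdmin admin = new HedwigAdmin(new ClientConfiguration(), new ServerConfiguration()); * for (Map.Entry&lt;HedwigSocketAddress, HubStats&gt; entry : admin.getAvailableHubs().entrySet()) { * System.out.println(entry.getKey() + " : " + entry.getValue()); * } * admin.close(); * </pre>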
* * @return available hubs and their loads * @throws Exception */ public Map<HedwigSocketAddress, HubStats> getAvailableHubs() throws Exception { String zkHubsPath = serverConf.getZkHostsPrefix(new StringBuilder()).toString(); Map<HedwigSocketAddress, HubStats> hubs = new HashMap<HedwigSocketAddress, HubStats>(); List<String> hosts = zk.getChildren(zkHubsPath, false); for (String host : hosts) { String zkHubPath = serverConf.getZkHostsPrefix(new StringBuilder()) .append("/").append(host).toString(); HedwigSocketAddress addr = new HedwigSocketAddress(host); try { Stat stat = new Stat(); byte[] data = zk.getData(zkHubPath, false, stat); if (data == null) { continue; } HubLoad load = HubLoad.parse(new String(data)); HubInfo info = new HubInfo(addr, stat.getCzxid()); hubs.put(addr, new HubStats(info, load)); } catch (KeeperException ke) { LOG.warn("Couldn't read hub data from ZooKeeper", ke); } catch (InterruptedException ie) { LOG.warn("Interrupted during read", ie); } } return hubs; } /** * Get list of topics * * @return list of topics * @throws Exception */ public Iterator<ByteString> getTopics() throws Exception { return mmFactory.getTopics(); } /** * Return the topic owner of a topic * * @param topic * Topic name * @return the address of the owner of a topic * @throws Exception */ public HubInfo getTopicOwner(ByteString topic) throws Exception { final SyncObj<HubInfo> syncObj = new SyncObj<HubInfo>(); tom.readOwnerInfo(topic, new Callback<Versioned<HubInfo>>() { @Override public void operationFinished(Object ctx, Versioned<HubInfo> result) { if (null == result) { syncObj.success(null); } else { syncObj.success(result.getValue()); } } @Override public void operationFailed(Object ctx, PubSubException pse) { syncObj.fail(pse); } }, syncObj); syncObj.block(); if (!syncObj.isSuccess()) { throw syncObj.exception; } return syncObj.value; } private static LedgerRange buildLedgerRange(long ledgerId, long startOfLedger, MessageSeqId endOfLedger) { LedgerRange.Builder builder = LedgerRange.newBuilder().setLedgerId(ledgerId).setStartSeqIdIncluded(startOfLedger) .setEndSeqIdIncluded(endOfLedger); return builder.build(); } /** * Return the ledger ranges forming the topic * * @param topic * Topic name * @return ledger ranges forming the topic * @throws Exception */ public List<LedgerRange> getTopicLedgers(ByteString topic) throws Exception { final SyncObj<LedgerRanges> syncObj = new SyncObj<LedgerRanges>(); tpm.readTopicPersistenceInfo(topic, new Callback<Versioned<LedgerRanges>>() { @Override public void operationFinished(Object ctx, Versioned<LedgerRanges> result) { if (null == result) { syncObj.success(null); } else { syncObj.success(result.getValue()); } } @Override public void operationFailed(Object ctx, PubSubException pse) { syncObj.fail(pse); } }, syncObj); syncObj.block(); if (!syncObj.isSuccess()) { throw syncObj.exception; } LedgerRanges ranges = syncObj.value; if (null == ranges) { return null; } List<LedgerRange> results = new ArrayList<LedgerRange>(); List<LedgerRange> lrs = ranges.getRangesList(); long startSeqId = 1L; if (!lrs.isEmpty()) { LedgerRange range = lrs.get(0); if (!range.hasStartSeqIdIncluded() && range.hasEndSeqIdIncluded()) { long ledgerId = range.getLedgerId(); try { LedgerHandle lh = bk.openLedgerNoRecovery(ledgerId, DigestType.CRC32, passwd); long numEntries = lh.readLastConfirmed() + 1; long endOfLedger = range.getEndSeqIdIncluded().getLocalComponent(); startSeqId = endOfLedger - numEntries + 1; } catch (BKException.BKNoSuchLedgerExistsException be) { // ignore it } } } Iterator<LedgerRange> lrIter = lrs.iterator(); while (lrIter.hasNext()) { LedgerRange range = lrIter.next(); if (range.hasEndSeqIdIncluded()) { long endOfLedger = range.getEndSeqIdIncluded().getLocalComponent(); if (range.hasStartSeqIdIncluded()) { startSeqId = 
range.getStartSeqIdIncluded(); } else { range = buildLedgerRange(range.getLedgerId(), startSeqId, range.getEndSeqIdIncluded()); } results.add(range); if (startSeqId < endOfLedger + 1) { startSeqId = endOfLedger + 1; } continue; } if (lrIter.hasNext()) { throw new IllegalStateException("Ledger " + range.getLedgerId() + " for topic " + topic.toStringUtf8() + " is not the last one but still does not have an end seq-id"); } if (range.hasStartSeqIdIncluded()) { startSeqId = range.getStartSeqIdIncluded(); } LedgerHandle lh = bk.openLedgerNoRecovery(range.getLedgerId(), DigestType.CRC32, passwd); long endOfLedger = startSeqId + lh.readLastConfirmed(); MessageSeqId endSeqId = MessageSeqId.newBuilder().setLocalComponent(endOfLedger).build(); results.add(buildLedgerRange(range.getLedgerId(), startSeqId, endSeqId)); } return results; } /** * Return subscriptions of a topic * * @param topic * Topic name * @return subscriptions of a topic * @throws Exception */ public Map<ByteString, SubscriptionData> getTopicSubscriptions(ByteString topic) throws Exception { final SyncObj<Map<ByteString, SubscriptionData>> syncObj = new SyncObj<Map<ByteString, SubscriptionData>>(); sdm.readSubscriptions(topic, new Callback<Map<ByteString, Versioned<SubscriptionData>>>() { @Override public void operationFinished(Object ctx, Map<ByteString, Versioned<SubscriptionData>> result) { // This is only used by the console tool to print information, so there is no need // to return the version; keep the getTopicSubscriptions interface as before Map<ByteString, SubscriptionData> subs = new ConcurrentHashMap<ByteString, SubscriptionData>(); for (Map.Entry<ByteString, Versioned<SubscriptionData>> subEntry : result.entrySet()) { subs.put(subEntry.getKey(), subEntry.getValue().getValue()); } syncObj.success(subs); } @Override public void operationFailed(Object ctx, PubSubException pse) { syncObj.fail(pse); } }, syncObj); syncObj.block(); if (!syncObj.isSuccess()) { throw syncObj.exception; } return syncObj.value; } /** * Return the subscription state of a subscriber of a topic * * @param topic * Topic name * @param subscriber * Subscriber name * @return subscription state * @throws Exception */ public SubscriptionData getSubscription(ByteString topic, ByteString subscriber) throws Exception { final SyncObj<SubscriptionData> syncObj = new SyncObj<SubscriptionData>(); sdm.readSubscriptionData(topic, subscriber, new Callback<Versioned<SubscriptionData>>() { @Override public void operationFinished(Object ctx, Versioned<SubscriptionData> result) { if (null == result) { syncObj.success(null); } else { syncObj.success(result.getValue()); } } @Override public void operationFailed(Object ctx, PubSubException pse) { syncObj.fail(pse); } }, syncObj); syncObj.block(); if (!syncObj.isSuccess()) { throw syncObj.exception; } return syncObj.value; } /** * Format metadata for Hedwig. */ public void format() throws Exception { // format metadata first mmFactory.format(serverConf, zk); LOG.info("Formatted Hedwig metadata successfully."); // remove metadata layout FactoryLayout.deleteLayout(zk, serverConf); LOG.info("Removed old factory layout."); // create new metadata manager factory and write new metadata layout MetadataManagerFactory.createMetadataManagerFactory(serverConf, zk, serverConf.getMetadataManagerFactoryClass()); LOG.info("Created new factory layout."); } } bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/admin/console/000077500000000000000000000000001244507361200305115ustar00rootroot00000000000000HedwigCommands.java000066400000000000000000000421151244507361200341710ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/admin/console/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. 
See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.admin.console; import java.util.Map; import java.util.List; import java.util.LinkedList; import java.util.LinkedHashMap; /** * List all the available commands */ public final class HedwigCommands { static final String[] EMPTY_ARRAY = new String[0]; // // List all commands used to play with hedwig // /* PUB : publish a message to hedwig */ static final String PUB = "pub"; static final String PUB_DESC = "Publish a message to a topic in Hedwig"; static final String[] PUB_USAGE = new String[] { "usage: pub {topic} {message}", "", " {topic} : topic name.", " any printable string without spaces.", " {message} : message body.", " remaining arguments are used as message body to publish.", }; /* SUB : subscribe to a topic in hedwig for a specified subscriber */ static final String SUB = "sub"; static final String SUB_DESC = "Subscribe to a topic for a specified subscriber"; static final String[] SUB_USAGE = new String[] { "usage: sub {topic} {subscriber} [mode] [receive]", "", " {topic} : topic name.", " any printable string without spaces.", " {subscriber} : subscriber id.", " any printable string without spaces.", " [mode] : mode to create subscription.", " [receive] : bool. whether to start delivery to receive messages.", "", " available modes: (default value is 1)", " 0 = CREATE: create subscription.", " if the subscription already exists, it will fail.", " 1 = ATTACH: attach to an existing subscription.", " if the subscription does not exist, it will fail.", " 2 = CREATE_OR_ATTACH:", " attach to the subscription, creating it if it does not exist." 
}; /* CLOSESUB : close the subscription of a subscriber for a topic */ static final String CLOSESUB = "closesub"; static final String CLOSESUB_DESC = "Close subscription of a subscriber to a specified topic"; static final String[] CLOSESUB_USAGE = new String[] { "usage: closesub {topic} {subscriber}", "", " {topic} : topic name.", " any printable string without spaces.", " {subscriber} : subscriber id.", " any printable string without spaces.", "", " NOTE: this command just cleans up subscription state on the client side.", " You can try UNSUB to clean up subscription state on the server side.", }; /* UNSUB : unsubscribe a subscriber from a topic */ static final String UNSUB = "unsub"; static final String UNSUB_DESC = "Unsubscribe a topic for a subscriber"; static final String[] UNSUB_USAGE = new String[] { "usage: unsub {topic} {subscriber}", "", " {topic} : topic name.", " any printable string without spaces.", " {subscriber} : subscriber id.", " any printable string without spaces.", "", " NOTE: this command will clean up subscription state on the server side.", " You can try CLOSESUB to just clean up subscription state on the client side.", }; static final String RMSUB = "rmsub"; static final String RMSUB_DESC = "Remove subscriptions for topics"; static final String[] RMSUB_USAGE = new String[] { "usage: rmsub {topic_prefix} {start_topic} {end_topic} {subscriber_prefix} {start_sub} {end_sub}", "", " {topic_prefix} : topic prefix.", " {start_topic} : start topic id.", " {end_topic} : end topic id.", " {subscriber_prefix} : subscriber prefix.", " {start_sub} : start subscriber id.", " {end_sub} : end subscriber id.", }; /* CONSUME: move consume ptr of a subscription with specified steps */ static final String CONSUME = "consume"; static final String CONSUME_DESC = "Move consume ptr of a subscription with specified steps"; static final String[] CONSUME_USAGE = new String[] { "usage: consume {topic} {subscriber} {nmsgs}", "", " {topic} : topic name.", " any printable string without spaces.", " {subscriber} : subscriber id.", " any printable string without spaces.", " {nmsgs} : how many messages to move the consume ptr.", "", " Example:", " suppose, from zk we know subscriber B consumed topic T to message 10", " [hedwig: (standalone) 1] consume T B 2", " after executing the above command, a consume(10+2) request will be sent to hedwig.", "", " NOTE:", " since Hedwig updates the subscription consume ptr lazily, you need to know that", " 1) the consume ptr read from zookeeper may be stale;", " 2) after the consume request is sent, hedwig may just move the ptr in memory and lazily update it in zookeeper. 
You may not see the ptr change when you DESCRIBE the topic.", }; /* CONSUMETO: move consume ptr of a subscription to a specified pos */ static final String CONSUMETO = "consumeto"; static final String CONSUMETO_DESC = "Move consume ptr of a subscription to a specified message id"; static final String[] CONSUMETO_USAGE = new String[] { "usage: consumeto {topic} {subscriber} {msg_id}", "", " {topic} : topic name.", " any printable string without spaces.", " {subscriber} : subscriber id.", " any printable string without spaces.", " {msg_id} : message id that the consume ptr will be moved to.", " if the message id is less than the current consume ptr,", " hedwig will do nothing.", "", " Example:", " suppose, from zk we know subscriber B consumed topic T to message 10", " [hedwig: (standalone) 1] consumeto T B 12", " after executing the above command, a consume(12) request will be sent to hedwig.", "", " NOTE:", " since Hedwig updates the subscription consume ptr lazily, you need to know that", " 1) the consume ptr read from zookeeper may be stale;", " 2) after the consume request is sent, hedwig may just move the ptr in memory and lazily update it in zookeeper. You may not see the ptr change when you DESCRIBE the topic.", }; /* PUBSUB: a health-check command to ensure the cluster is running */ static final String PUBSUB = "pubsub"; static final String PUBSUB_DESC = "A health-check command to ensure hedwig is in a running state"; static final String[] PUBSUB_USAGE = new String[] { "usage: pubsub {topic} {subscriber} {timeout_secs} {message}", "", " {topic} : topic name.", " any printable string without spaces.", " {subscriber} : subscriber id.", " any printable string without spaces.", " {timeout_secs} : how long the subscriber will wait for the published message.", " {message} : message body.", " remaining arguments are used as message body to publish.", "", " Example:", " [hedwig: (standalone) 1] pubsub TOPIC SUBID 10 TEST_MESSAGES", "", " 1) hw will subscribe topic TOPIC as subscriber SUBID;", " 2) subscriber SUBID will wait up to 10 seconds for a message;", " 3) hw publishes TEST_MESSAGES to topic TOPIC;", " 4) if the subscriber receives the message within 10 secs, it checks whether the message is the published one.", " if true, it will return SUCCESS, otherwise return FAILED.", }; // // List all commands used to admin hedwig // /* SHOW: list all available hub servers or topics */ static final String SHOW = "show"; static final String SHOW_DESC = "list all available hub servers or topics"; static final String[] SHOW_USAGE = new String[] { "usage: show [topics | hubs]", "", " show topics :", " listing all available topics in hedwig.", "", " show hubs :", " listing all available hubs in hedwig.", "", " NOTES:", " 'show topics' will not work when there are millions of topics in hedwig, since we have a packetLen limitation when fetching data from zookeeper.", }; static final String SHOW_TOPICS = "topics"; static final String SHOW_HUBS = "hubs"; /* DESCRIBE: show the metadata of a topic */ static final String DESCRIBE = "describe"; static final String DESCRIBE_DESC = "show metadata of a topic, including topic owner, persistence info, subscriptions info"; static final String[] DESCRIBE_USAGE = new String[] { "usage: describe topic {topic}", "", " {topic} : topic name.", " any printable string without spaces.", "", " Example: describe topic ttttt", "", " Output:", " ===== Topic Information : ttttt =====", "", " Owner : 98.137.99.27:9875:9876", "", " >>> Persistence Info <<<", " Ledger 54729 [ 1 ~ 59 ]", " Ledger 54731 [ 60 ~ 60 ]", 
" Ledger 54733 [ 61 ~ 61 ]", "", " >>> Subscription Info <<<", " Subscriber mysub : consumeSeqId: local:50", }; static final String DESCRIBE_TOPIC = "topic"; /* READTOPIC: read messages of a specified topic */ static final String READTOPIC = "readtopic"; static final String READTOPIC_DESC = "read messages of a specified topic"; static final String[] READTOPIC_USAGE = new String[] { "usage: readtopic {topic} [start_msg_id]", "", " {topic} : topic name.", " any printable string without spaces.", " [start_msg_id] : message id that start to read from.", "", " no start_msg_id provided:", " it will start from least_consumed_message_id + 1.", " least_consume_message_id is computed from all its subscribers.", "", " start_msg_id provided:", " it will start from MAX(start_msg_id, least_consumed_message_id).", "", " MESSAGE FORMAT:", "", " ---------- MSGID=LOCAL(51) ----------", " MsgId: LOCAL(51)", " SrcRegion: standalone", " Message:", "", " hello", }; /* FORMAT: format metadata for Hedwig */ static final String FORMAT = "format"; static final String FORMAT_DESC = "format metadata for Hedwig"; static final String[] FORMAT_USAGE = new String[] { "usage: format [-force]", "", " [-force] : Format metadata for Hedwig w/o confirmation.", }; // // List other useful commands // /* SET: set whether printing zk watches or not */ static final String SET = "set"; static final String SET_DESC = "set whether printing zk watches or not"; static final String[] SET_USAGE = EMPTY_ARRAY; /* HISTORY: list history commands */ static final String HISTORY = "history"; static final String HISTORY_DESC = "list history commands"; static final String[] HISTORY_USAGE = EMPTY_ARRAY; /* REDO: redo previous command */ static final String REDO = "redo"; static final String REDO_DESC = "redo history command"; static final String[] REDO_USAGE = new String[] { "usage: redo [{cmdno} | !]", "", " {cmdno} : history command no.", " ! 
: last command.", }; /* HELP: print usage information of a specified command */ static final String HELP = "help"; static final String HELP_DESC = "print usage information of a specified command"; static final String[] HELP_USAGE = new String[] { "usage: help {command}", "", " {command} : command name", }; static final String QUIT = "quit"; static final String QUIT_DESC = "exit console"; static final String[] QUIT_USAGE = EMPTY_ARRAY; static final String EXIT = "exit"; static final String EXIT_DESC = QUIT_DESC; static final String[] EXIT_USAGE = EMPTY_ARRAY; public static enum COMMAND { CMD_PUB (PUB, PUB_DESC, PUB_USAGE), CMD_SUB (SUB, SUB_DESC, SUB_USAGE), CMD_CLOSESUB (CLOSESUB, CLOSESUB_DESC, CLOSESUB_USAGE), CMD_UNSUB (UNSUB, UNSUB_DESC, UNSUB_USAGE), CMD_RMSUB (RMSUB, RMSUB_DESC, RMSUB_USAGE), CMD_CONSUME (CONSUME, CONSUME_DESC, CONSUME_USAGE), CMD_CONSUMETO (CONSUMETO, CONSUMETO_DESC, CONSUMETO_USAGE), CMD_PUBSUB (PUBSUB, PUBSUB_DESC, PUBSUB_USAGE), CMD_SHOW (SHOW, SHOW_DESC, SHOW_USAGE), CMD_DESCRIBE (DESCRIBE, DESCRIBE_DESC, DESCRIBE_USAGE), CMD_READTOPIC (READTOPIC, READTOPIC_DESC, READTOPIC_USAGE), CMD_FORMAT (FORMAT, FORMAT_DESC, FORMAT_USAGE), CMD_SET (SET, SET_DESC, SET_USAGE), CMD_HISTORY (HISTORY, HISTORY_DESC, HISTORY_USAGE), CMD_REDO (REDO, REDO_DESC, REDO_USAGE), CMD_HELP (HELP, HELP_DESC, HELP_USAGE), CMD_QUIT (QUIT, QUIT_DESC, QUIT_USAGE), CMD_EXIT (EXIT, EXIT_DESC, EXIT_USAGE), // sub commands CMD_SHOW_TOPICS (SHOW_TOPICS, "", EMPTY_ARRAY), CMD_SHOW_HUBS (SHOW_HUBS, "", EMPTY_ARRAY), CMD_DESCRIBE_TOPIC (DESCRIBE_TOPIC, "", EMPTY_ARRAY); COMMAND(String name, String desc, String[] usage) { this.name = name; this.desc = desc; this.usage = usage; this.subCmds = new LinkedHashMap(); } public String getName() { return name; } public String getDescription() { return desc; } public Map getSubCommands() { return subCmds; } public void addSubCommand(COMMAND c) { this.subCmds.put(c.name, c); }; public void printUsage() { System.err.println(name + ": " + desc); for(String line : usage) { System.err.println(line); } System.err.println(); } protected String name; protected String desc; protected String[] usage; protected Map subCmds; } static Map commands = null; private static void addCommand(COMMAND c) { commands.put(c.getName(), c); } static synchronized void init() { if (commands != null) { return; } commands = new LinkedHashMap(); addCommand(COMMAND.CMD_PUB); addCommand(COMMAND.CMD_SUB); addCommand(COMMAND.CMD_CLOSESUB); addCommand(COMMAND.CMD_UNSUB); addCommand(COMMAND.CMD_RMSUB); addCommand(COMMAND.CMD_CONSUME); addCommand(COMMAND.CMD_CONSUMETO); addCommand(COMMAND.CMD_PUBSUB); // show COMMAND.CMD_SHOW.addSubCommand(COMMAND.CMD_SHOW_TOPICS); COMMAND.CMD_SHOW.addSubCommand(COMMAND.CMD_SHOW_HUBS); addCommand(COMMAND.CMD_SHOW); // describe COMMAND.CMD_DESCRIBE.addSubCommand(COMMAND.CMD_DESCRIBE_TOPIC); addCommand(COMMAND.CMD_DESCRIBE); addCommand(COMMAND.CMD_READTOPIC); addCommand(COMMAND.CMD_FORMAT); addCommand(COMMAND.CMD_SET); addCommand(COMMAND.CMD_HISTORY); addCommand(COMMAND.CMD_REDO); addCommand(COMMAND.CMD_HELP); addCommand(COMMAND.CMD_QUIT); addCommand(COMMAND.CMD_EXIT); } public static Map getHedwigCommands() { return commands; } /** * Find candidate commands by the specified token list * * @param token token list * * @return list of candidate commands */ public static List findCandidateCommands(String[] tokens) { List cmds = new LinkedList(); Map cmdMap = commands; for (int i=0; i<(tokens.length - 1); i++) { COMMAND c = cmdMap.get(tokens[i]); // no commands if (c == 
null || c.getSubCommands().size() <= 0) { return cmds; } else { cmdMap = c.getSubCommands(); } } cmds.addAll(cmdMap.keySet()); return cmds; } } HedwigConsole.java000066400000000000000000001064261244507361200340400ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/admin/console/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.admin.console; import jline.ConsoleReader; import jline.History; import jline.Terminal; import java.io.BufferedReader; import java.io.File; import java.io.IOException; import java.io.InputStreamReader; import java.util.ArrayList; import java.util.Arrays; import java.util.HashMap; import java.util.Iterator; import java.util.LinkedHashMap; import java.util.List; import java.util.Map; import java.util.NoSuchElementException; import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; import org.apache.bookkeeper.util.MathUtils; import org.apache.commons.configuration.ConfigurationException; import org.apache.hedwig.admin.HedwigAdmin; import org.apache.hedwig.client.api.MessageHandler; import org.apache.hedwig.client.api.Publisher; import org.apache.hedwig.client.api.Subscriber; import org.apache.hedwig.client.conf.ClientConfiguration; import org.apache.hedwig.client.HedwigClient; import org.apache.hedwig.protocol.PubSubProtocol.LedgerRange; import org.apache.hedwig.protocol.PubSubProtocol.LedgerRanges; import org.apache.hedwig.protocol.PubSubProtocol.Message; import org.apache.hedwig.protocol.PubSubProtocol.MessageSeqId; import org.apache.hedwig.protocol.PubSubProtocol.SubscribeRequest.CreateOrAttach; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionData; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionEvent; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionOptions; import org.apache.hedwig.protoextensions.SubscriptionStateUtils; import org.apache.hedwig.server.common.ServerConfiguration; import org.apache.hedwig.server.topics.HubInfo; import org.apache.hedwig.util.Callback; import org.apache.hedwig.util.HedwigSocketAddress; import org.apache.hedwig.util.SubscriptionListener; import org.apache.zookeeper.WatchedEvent; import org.apache.zookeeper.Watcher; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import com.google.protobuf.ByteString; import static org.apache.hedwig.admin.console.HedwigCommands.*; import static org.apache.hedwig.admin.console.HedwigCommands.COMMAND.*; /** * Console Client to Hedwig */ public class HedwigConsole { private static final Logger LOG = LoggerFactory.getLogger(HedwigConsole.class); // NOTE: now it is fixed passwd in bookkeeper static byte[] passwd = "sillysecret".getBytes(); // history file name static final String HW_HISTORY_FILE = ".hw_history"; static final char[] 
CONTINUE_OR_QUIT = new char[] { 'Q', 'q', '\n' }; protected MyCommandOptions cl = new MyCommandOptions(); protected HashMap<Integer, String> history = new LinkedHashMap<Integer, String>(); protected int commandCount = 0; protected boolean printWatches = true; protected Map<String, MyCommand> myCommands; protected boolean inConsole = true; protected ConsoleReader console = null; protected HedwigAdmin admin; protected HedwigClient hubClient; protected Publisher publisher; protected Subscriber subscriber; protected ConsoleMessageHandler consoleHandler = new ConsoleMessageHandler(); protected Terminal terminal; protected String myRegion; interface MyCommand { boolean runCmd(String[] args) throws Exception; } static class HelpCmd implements MyCommand { @Override public boolean runCmd(String[] args) throws Exception { boolean printUsage = true; if (args.length >= 2) { String command = args[1]; COMMAND c = getHedwigCommands().get(command); if (c != null) { c.printUsage(); printUsage = false; } } if (printUsage) { usage(); } return true; } } class ExitCmd implements MyCommand { @Override public boolean runCmd(String[] args) throws Exception { printMessage("Quitting ..."); hubClient.close(); admin.close(); Runtime.getRuntime().exit(0); return true; } } class RedoCmd implements MyCommand { @Override public boolean runCmd(String[] args) throws Exception { if (args.length < 2) { return false; } int index; if ("!".equals(args[1])) { index = commandCount - 1; } else { index = Integer.decode(args[1]); if (commandCount <= index) { System.err.println("Command index out of range"); return false; } } cl.parseCommand(history.get(index)); if (cl.getCommand().equals("redo")) { System.err.println("No redoing redos"); return false; } history.put(commandCount, history.get(index)); processCmd(cl); return true; } } class HistoryCmd implements MyCommand { @Override public boolean runCmd(String[] args) throws Exception { for (int i=commandCount - 10; i<=commandCount; ++i) { if (i < 0) { continue; } System.out.println(i + " - " + history.get(i)); } return true; } } class SetCmd implements MyCommand { @Override public boolean runCmd(String[] args) throws Exception { if (args.length < 2 || !"printwatches".equals(args[1])) { return false; } else if (args.length == 2) { System.out.println("printwatches is " + (printWatches ? 
"on" : "off")); } else { printWatches = args[2].equals("on"); } return true; } } class PubCmd implements MyCommand { @Override public boolean runCmd(String[] args) throws Exception { if (args.length < 3) { return false; } ByteString topic = ByteString.copyFromUtf8(args[1]); StringBuilder sb = new StringBuilder(); for (int i=2; i callback, Object context) { System.out.println("Received message from topic " + topic.toStringUtf8() + " for subscriber " + subscriberId.toStringUtf8() + " : " + msg.getBody().toStringUtf8()); callback.operationFinished(context, null); } } static class ConsoleSubscriptionListener implements SubscriptionListener { @Override public void processEvent(ByteString t, ByteString s, SubscriptionEvent event) { System.out.println("Subscription Channel for (topic:" + t.toStringUtf8() + ", subscriber:" + s.toStringUtf8() + ") received event : " + event); } } class SubCmd implements MyCommand { @Override public boolean runCmd(String[] args) throws Exception { CreateOrAttach mode; boolean receive = true; if (args.length < 3) { return false; } else if (args.length == 3) { mode = CreateOrAttach.ATTACH; receive = true; } else { try { mode = CreateOrAttach.valueOf(Integer.parseInt(args[3])); } catch (Exception e) { System.err.println("Unknow mode : " + args[3]); return false; } if (args.length >= 5) { try { receive = Boolean.parseBoolean(args[4]); } catch (Exception e) { receive = false; } } } if (mode == null) { System.err.println("Unknow mode : " + args[3]); return false; } ByteString topic = ByteString.copyFromUtf8(args[1]); ByteString subId = ByteString.copyFromUtf8(args[2]); try { SubscriptionOptions options = SubscriptionOptions.newBuilder().setCreateOrAttach(mode) .setForceAttach(false).build(); subscriber.subscribe(topic, subId, options); if (receive) { subscriber.startDelivery(topic, subId, consoleHandler); System.out.println("SUB DONE AND RECEIVE"); } else { System.out.println("SUB DONE BUT NOT RECEIVE"); } } catch (Exception e) { System.err.println("SUB FAILED"); e.printStackTrace(); } return true; } } class UnsubCmd implements MyCommand { @Override public boolean runCmd(String[] args) throws Exception { if (args.length < 3) { return false; } ByteString topic = ByteString.copyFromUtf8(args[1]); ByteString subId = ByteString.copyFromUtf8(args[2]); try { subscriber.stopDelivery(topic, subId); subscriber.unsubscribe(topic, subId); System.out.println("UNSUB DONE"); } catch (Exception e) { System.err.println("UNSUB FAILED"); e.printStackTrace(); } return true; } } class RmsubCmd implements MyCommand { @Override public boolean runCmd(String[] args) throws Exception { if (args.length < 7) { return false; } String topicPrefix = args[1]; int startTopic = Integer.parseInt(args[2]); int endTopic = Integer.parseInt(args[3]); String subPrefix = args[4]; int startSub = Integer.parseInt(args[5]); int endSub = Integer.parseInt(args[6]); if (startTopic > endTopic || endSub < startSub) { return false; } for (int i=startTopic; i<=endTopic; i++) { ByteString topic = ByteString.copyFromUtf8(topicPrefix + i); try { for (int j=startSub; j<=endSub; j++) { ByteString sub = ByteString.copyFromUtf8(subPrefix + j); subscriber.subscribe(topic, sub, CreateOrAttach.CREATE_OR_ATTACH); subscriber.unsubscribe(topic, sub); } System.out.println("RMSUB " + topic.toStringUtf8() + " DONE"); } catch (Exception e) { System.err.println("RMSUB " + topic.toStringUtf8() + " FAILED"); e.printStackTrace(); } } return true; } } class CloseSubscriptionCmd implements MyCommand { @Override public boolean runCmd(String[] 
args) throws Exception { if (args.length < 3) { return false; } ByteString topic = ByteString.copyFromUtf8(args[1]); ByteString subId = ByteString.copyFromUtf8(args[2]); try { subscriber.stopDelivery(topic, subId); subscriber.closeSubscription(topic, subId); } catch (Exception e) { System.err.println("CLOSESUB FAILED"); } return true; } } class ConsumeToCmd implements MyCommand { @Override public boolean runCmd(String[] args) throws Exception { if (args.length < 4) { return false; } ByteString topic = ByteString.copyFromUtf8(args[1]); ByteString subId = ByteString.copyFromUtf8(args[2]); long msgId = Long.parseLong(args[3]); MessageSeqId consumeId = MessageSeqId.newBuilder().setLocalComponent(msgId).build(); try { subscriber.consume(topic, subId, consumeId); } catch (Exception e) { System.err.println("CONSUMETO FAILED"); } return true; } } class ConsumeCmd implements MyCommand { @Override public boolean runCmd(String[] args) throws Exception { if (args.length < 4) { return false; } long lastConsumedId = 0; SubscriptionData subData = admin.getSubscription(ByteString.copyFromUtf8(args[1]), ByteString.copyFromUtf8(args[2])); if (null == subData) { System.err.println("Failed to read subscription for topic: " + args[1] + " subscriber: " + args[2]); return true; } lastConsumedId = subData.getState().getMsgId().getLocalComponent(); long numMessagesToConsume = Long.parseLong(args[3]); long idToConsumeTo = lastConsumedId + numMessagesToConsume; System.out.println("Try to move subscriber(" + args[2] + ") consume ptr of topic(" + args[1] + ") from " + lastConsumedId + " to " + idToConsumeTo); MessageSeqId consumeId = MessageSeqId.newBuilder().setLocalComponent(idToConsumeTo).build(); ByteString topic = ByteString.copyFromUtf8(args[1]); ByteString subId = ByteString.copyFromUtf8(args[2]); try { subscriber.consume(topic, subId, consumeId); } catch (Exception e) { System.err.println("CONSUME FAILED"); } return true; } } class PubSubCmd implements MyCommand { @Override public boolean runCmd(String[] args) throws Exception { if (args.length < 5) { return false; } final long startTime = MathUtils.now(); final ByteString topic = ByteString.copyFromUtf8(args[1]); final ByteString subId = ByteString.copyFromUtf8(args[2] + "-" + startTime); int timeoutSecs = 60; try { timeoutSecs = Integer.parseInt(args[3]); } catch (NumberFormatException nfe) { } StringBuilder sb = new StringBuilder(); for (int i=4; i<args.length; i++) { sb.append(args[i]); if (i != args.length - 1) { sb.append(' '); } } // append the start time so the published message is unique sb.append('-').append(startTime); final Message message = Message.newBuilder().setBody(ByteString.copyFromUtf8(sb.toString())).build(); boolean subscribed = false; boolean success = false; final CountDownLatch isDone = new CountDownLatch(1); long elapsedTime = 0L; try { // subscribe to the topic first SubscriptionOptions options = SubscriptionOptions.newBuilder().setCreateOrAttach(CreateOrAttach.CREATE_OR_ATTACH).build(); subscriber.subscribe(topic, subId, options); subscribed = true; // publish the message publisher.publish(topic, message); // start delivery and wait until the published message comes back subscriber.startDelivery(topic, subId, new MessageHandler() { @Override public void deliver(ByteString thisTopic, ByteString subscriberId, Message msg, Callback<Void> callback, Object context) { if (thisTopic.equals(topic) && subscriberId.equals(subId) && msg.getBody().equals(message.getBody())) { System.out.println("Received message : " + message.getBody().toStringUtf8()); isDone.countDown(); } callback.operationFinished(context, null); } }); // wait for the message success = isDone.await(timeoutSecs, TimeUnit.SECONDS); elapsedTime = MathUtils.now() - startTime; } finally { try { if (subscribed) { subscriber.stopDelivery(topic, subId); subscriber.unsubscribe(topic, subId); } } finally { if (success) { System.out.println("PUBSUB SUCCESS. TIME: " + elapsedTime + " MS"); } else { System.out.println("PUBSUB FAILED. 
"); } return success; } } } } class ReadTopicCmd implements MyCommand { @Override public boolean runCmd(String[] args) throws Exception { if (args.length < 2) { return false; } ReadTopic rt; ByteString topic = ByteString.copyFromUtf8(args[1]); if (args.length == 2) { rt = new ReadTopic(admin, topic, inConsole); } else { rt = new ReadTopic(admin, topic, Long.parseLong(args[2]), inConsole); } rt.readTopic(); return true; } } class ShowCmd implements MyCommand { static final int MAX_TOPICS_PER_SHOW = 100; @Override public boolean runCmd(String[] args) throws Exception { if (args.length < 2) { return false; } String errorMsg = null; try { if (HedwigCommands.SHOW_HUBS.equals(args[1])) { errorMsg = "Unable to fetch the list of hub servers"; showHubs(); } else if (HedwigCommands.SHOW_TOPICS.equals(args[1])) { errorMsg = "Unable to fetch the list of topics"; showTopics(); } else { System.err.println("ERROR: Unknown show command '" + args[1] + "'"); return false; } } catch (Exception e) { if (null != errorMsg) { System.err.println(errorMsg); } e.printStackTrace(); } return true; } protected void showHubs() throws Exception { Map hubs = admin.getAvailableHubs(); System.out.println("Available Hub Servers:"); for (Map.Entry entry : hubs.entrySet()) { System.out.println("\t" + entry.getKey() + " :\t" + entry.getValue()); } } protected void showTopics() throws Exception { List topics = new ArrayList(); Iterator iter = admin.getTopics(); System.out.println("Topic List:"); boolean stop = false; while (iter.hasNext()) { if (topics.size() >= MAX_TOPICS_PER_SHOW) { System.out.println(topics); topics.clear(); stop = !continueOrQuit(); if (stop) { break; } } ByteString t = iter.next(); topics.add(t.toStringUtf8()); } if (!stop) { System.out.println(topics); } } } class DescribeCmd implements MyCommand { @Override public boolean runCmd(String[] args) throws Exception { if (args.length < 3) { return false; } if (HedwigCommands.DESCRIBE_TOPIC.equals(args[1])) { return describeTopic(args[2]); } else { return false; } } protected boolean describeTopic(String topic) throws Exception { ByteString btopic = ByteString.copyFromUtf8(topic); HubInfo owner = admin.getTopicOwner(btopic); List ranges = admin.getTopicLedgers(btopic); Map states = admin.getTopicSubscriptions(btopic); System.out.println("===== Topic Information : " + topic + " ====="); System.out.println(); System.out.println("Owner : " + (owner == null ? 
"NULL" : owner.toString().trim().replaceAll("\n", ", "))); System.out.println(); // print ledgers printTopicLedgers(ranges); // print subscriptions printTopicSubscriptions(states); return true; } private void printTopicLedgers(List ranges) { System.out.println(">>> Persistence Info <<<"); if (null == ranges) { System.out.println("N/A"); return; } if (ranges.isEmpty()) { System.out.println("No Ledger used."); return; } for (LedgerRange range : ranges) { System.out.println("Ledger " + range.getLedgerId() + " [ " + range.getStartSeqIdIncluded() + " ~ " + range.getEndSeqIdIncluded().getLocalComponent() + " ]"); } System.out.println(); } private void printTopicSubscriptions(Map states) { System.out.println(">>> Subscription Info <<<"); if (0 == states.size()) { System.out.println("No subscriber."); return; } for (Map.Entry entry : states.entrySet()) { System.out.println("Subscriber " + entry.getKey().toStringUtf8() + " : " + SubscriptionStateUtils.toString(entry.getValue())); } System.out.println(); } } class FormatCmd implements MyCommand { @Override public boolean runCmd(String[] args) throws Exception { boolean force = false; if (args.length >= 2 && "-force".equals(args[1])) { force = true; } boolean doFormat = true; System.out.println("You ask to format hedwig metadata stored in " + admin.getMetadataManagerFactory().getClass().getName() + "."); if (!force) { doFormat = continueOrQuit(); } if (doFormat) { admin.format(); System.out.println("Formatted hedwig metadata successfully."); } else { System.out.println("Given up formatting hedwig metadata."); } return true; } } protected Map buildMyCommands() { Map cmds = new HashMap(); ExitCmd exitCmd = new ExitCmd(); cmds.put(EXIT, exitCmd); cmds.put(QUIT, exitCmd); cmds.put(HELP, new HelpCmd()); cmds.put(HISTORY, new HistoryCmd()); cmds.put(REDO, new RedoCmd()); cmds.put(SET, new SetCmd()); cmds.put(PUB, new PubCmd()); cmds.put(SUB, new SubCmd()); cmds.put(PUBSUB, new PubSubCmd()); cmds.put(CLOSESUB, new CloseSubscriptionCmd()); cmds.put(UNSUB, new UnsubCmd()); cmds.put(RMSUB, new RmsubCmd()); cmds.put(CONSUME, new ConsumeCmd()); cmds.put(CONSUMETO, new ConsumeToCmd()); cmds.put(SHOW, new ShowCmd()); cmds.put(DESCRIBE, new DescribeCmd()); cmds.put(READTOPIC, new ReadTopicCmd()); cmds.put(FORMAT, new FormatCmd()); return cmds; } static void usage() { System.err.println("HedwigConsole [options] [command] [args]"); System.err.println(); System.err.println("Avaiable commands:"); for (String cmd : getHedwigCommands().keySet()) { System.err.println("\t" + cmd); } System.err.println(); } /** * A storage class for both command line options and shell commands. */ static private class MyCommandOptions { private Map options = new HashMap(); private List cmdArgs = null; private String command = null; public MyCommandOptions() { } public String getOption(String opt) { return options.get(opt); } public String getCommand( ) { return command; } public String getCmdArgument( int index ) { return cmdArgs.get(index); } public int getNumArguments( ) { return cmdArgs.size(); } public String[] getArgArray() { return cmdArgs.toArray(new String[0]); } /** * Parses a command line that may contain one or more flags * before an optional command string * @param args command line arguments * @return true if parsing succeeded, false otherwise. 
*/ public boolean parseOptions(String[] args) { List<String> argList = Arrays.asList(args); Iterator<String> it = argList.iterator(); while (it.hasNext()) { String opt = it.next(); if (!opt.startsWith("-")) { command = opt; cmdArgs = new ArrayList<String>( ); cmdArgs.add( command ); while (it.hasNext()) { cmdArgs.add(it.next()); } return true; } else { try { options.put(opt.substring(1), it.next()); } catch (NoSuchElementException e) { System.err.println("Error: no argument found for option " + opt); return false; } } } return true; } /** * Breaks a string into command + arguments. * @param cmdstring string of form "cmd arg1 arg2..etc" * @return true if parsing succeeded. */ public boolean parseCommand( String cmdstring ) { String[] args = cmdstring.split(" "); if (args.length == 0) { return false; } command = args[0]; cmdArgs = Arrays.asList(args); return true; } } private class MyWatcher implements Watcher { public void process(WatchedEvent event) { if (getPrintWatches()) { printMessage("WATCHER::"); printMessage(event.toString()); } } } public void printMessage(String msg) { if (inConsole) { System.out.println("\n"+msg); } } /** * Hedwig Console * * @param args arguments * @throws IOException * @throws InterruptedException */ public HedwigConsole(String[] args) throws IOException, InterruptedException { // Setup Terminal terminal = Terminal.setupTerminal(); HedwigCommands.init(); cl.parseOptions(args); if (cl.getCommand() == null) { inConsole = true; } else { inConsole = false; } org.apache.bookkeeper.conf.ClientConfiguration bkClientConf = new org.apache.bookkeeper.conf.ClientConfiguration(); ServerConfiguration hubServerConf = new ServerConfiguration(); String serverCfgFile = cl.getOption("server-cfg"); if (serverCfgFile != null) { try { hubServerConf.loadConf(new File(serverCfgFile).toURI().toURL()); } catch (ConfigurationException e) { throw new IOException(e); } try { bkClientConf.loadConf(new File(serverCfgFile).toURI().toURL()); } catch (ConfigurationException e) { throw new IOException(e); } } ClientConfiguration hubClientCfg = new ClientConfiguration(); String clientCfgFile = cl.getOption("client-cfg"); if (clientCfgFile != null) { try { hubClientCfg.loadConf(new File(clientCfgFile).toURI().toURL()); } catch (ConfigurationException e) { throw new IOException(e); } } printMessage("Connecting to zookeeper/bookkeeper using HedwigAdmin"); try { admin = new HedwigAdmin(bkClientConf, hubServerConf); admin.getZkHandle().register(new MyWatcher()); } catch (Exception e) { throw new IOException(e); } printMessage("Connecting to default hub server " + hubClientCfg.getDefaultServerHost()); hubClient = new HedwigClient(hubClientCfg); publisher = hubClient.getPublisher(); subscriber = hubClient.getSubscriber(); subscriber.addSubscriptionListener(new ConsoleSubscriptionListener()); // other parameters myRegion = hubServerConf.getMyRegion(); } public boolean getPrintWatches() { return printWatches; } protected String getPrompt() { StringBuilder sb = new StringBuilder(); sb.append("[hedwig: (").append(myRegion).append(") ").append(commandCount).append("] "); return sb.toString(); } protected boolean continueOrQuit() throws IOException { System.out.println("Press <return> to continue, or Q to cancel ..."); int ch; if (null != console) { ch = console.readCharacter(CONTINUE_OR_QUIT); } else { do { ch = terminal.readCharacter(System.in); } while (ch != 'q' && ch != 'Q' && ch != '\n'); } if (ch == 'q' || ch == 'Q') { return false; } return true; } protected void addToHistory(int i, String cmd) { history.put(i, cmd); } public void 
executeLine(String line) { if (!line.equals("")) { cl.parseCommand(line); addToHistory(commandCount, line); processCmd(cl); commandCount++; } } protected boolean processCmd(MyCommandOptions co) { String[] args = co.getArgArray(); String cmd = co.getCommand(); if (args.length < 1) { usage(); return false; } if (!getHedwigCommands().containsKey(cmd)) { usage(); return false; } LOG.debug("Processing {}", cmd); MyCommand myCommand = myCommands.get(cmd); if (myCommand == null) { System.err.println("No Command Processor found for command " + cmd); usage(); return false; } long startTime = MathUtils.now(); boolean success = false; try { success = myCommand.runCmd(args); } catch (Exception e) { e.printStackTrace(); success = false; } long elapsedTime = MathUtils.now() - startTime; if (inConsole) { if (success) { System.out.println("Finished " + ((double)elapsedTime / 1000) + " s."); } else { COMMAND c = getHedwigCommands().get(cmd); if (c != null) { c.printUsage(); } } } return success; } @SuppressWarnings("unchecked") void run() throws IOException { inConsole = true; myCommands = buildMyCommands(); if (cl.getCommand() == null) { System.out.println("Welcome to Hedwig!"); System.out.println("JLine support is enabled"); console = new ConsoleReader(); JLineHedwigCompletor completor = new JLineHedwigCompletor(admin); console.addCompletor(completor); // load history file History history = new History(); File file = new File(System.getProperty("hw.history", new File(System.getProperty("user.home"), HW_HISTORY_FILE).toString())); if (LOG.isDebugEnabled()) { LOG.debug("History file is " + file.toString()); } history.setHistoryFile(file); // set history to console reader console.setHistory(history); // load history from history file history.moveToFirstEntry(); while (history.next()) { String entry = history.current(); if (!entry.equals("")) { addToHistory(commandCount, entry); } commandCount++; } System.out.println("JLine history support is enabled"); String line; while ((line = console.readLine(getPrompt())) != null) { executeLine(line); history.addToHistory(line); } } inConsole = false; processCmd(cl); try { myCommands.get(EXIT).runCmd(new String[0]); } catch (Exception e) { } } public static void main(String[] args) throws IOException, InterruptedException { HedwigConsole console = new HedwigConsole(args); console.run(); } } JLineHedwigCompletor.java000066400000000000000000000072211244507361200353150ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/admin/console/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package org.apache.hedwig.admin.console; import java.util.Iterator; import java.util.List; import org.apache.zookeeper.KeeperException; import org.apache.hedwig.admin.HedwigAdmin; import com.google.protobuf.ByteString; import jline.Completor; import static org.apache.hedwig.admin.console.HedwigCommands.*; /** * A jline completor for hedwig console */ public class JLineHedwigCompletor implements Completor { // for topic completion static final int MAX_TOPICS_TO_SEARCH = 1000; private HedwigAdmin admin; public JLineHedwigCompletor(HedwigAdmin admin) { this.admin = admin; } @Override public int complete(String buffer, int cursor, List candidates) { // Guarantee that the final token is the one we're expanding buffer = buffer.substring(0,cursor); String[] tokens = buffer.split(" "); if (buffer.endsWith(" ")) { String[] newTokens = new String[tokens.length + 1]; System.arraycopy(tokens, 0, newTokens, 0, tokens.length); newTokens[newTokens.length - 1] = ""; tokens = newTokens; } if (tokens.length > 2 && DESCRIBE.equalsIgnoreCase(tokens[0]) && DESCRIBE_TOPIC.equalsIgnoreCase(tokens[1])) { return completeTopic(buffer, tokens[2], candidates); } else if (tokens.length > 1 && (SUB.equalsIgnoreCase(tokens[0]) || PUB.equalsIgnoreCase(tokens[0]) || CLOSESUB.equalsIgnoreCase(tokens[0]) || CONSUME.equalsIgnoreCase(tokens[0]) || CONSUMETO.equalsIgnoreCase(tokens[0]) || READTOPIC.equalsIgnoreCase(tokens[0]))) { return completeTopic(buffer, tokens[1], candidates); } List<String> cmds = HedwigCommands.findCandidateCommands(tokens); return completeCommand(buffer, tokens[tokens.length - 1], cmds, candidates); } private int completeCommand(String buffer, String token, List<String> commands, List candidates) { for (String cmd : commands) { if (cmd.startsWith(token)) { candidates.add(cmd); } } return buffer.lastIndexOf(" ") + 1; } private int completeTopic(String buffer, String token, List candidates) { try { Iterator<ByteString> children = admin.getTopics(); int i = 0; while (children.hasNext() && i <= MAX_TOPICS_TO_SEARCH) { String child = children.next().toStringUtf8(); if (child.startsWith(token)) { candidates.add(child); } ++i; } } catch (Exception e) { return buffer.length(); } return candidates.size() == 0 ? buffer.length() : buffer.lastIndexOf(" ") + 1; } } bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/admin/console/ReadTopic.java000066400000000000000000000273311244507361200332340ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package org.apache.hedwig.admin.console; import java.io.BufferedReader; import java.io.IOException; import java.io.InputStreamReader; import java.util.ArrayList; import java.util.Enumeration; import java.util.Iterator; import java.util.List; import java.util.Map; import org.apache.bookkeeper.client.BKException; import org.apache.bookkeeper.client.BookKeeper; import org.apache.bookkeeper.client.BookKeeper.DigestType; import org.apache.bookkeeper.client.LedgerEntry; import org.apache.bookkeeper.client.LedgerHandle; import org.apache.hedwig.admin.HedwigAdmin; import org.apache.hedwig.protocol.PubSubProtocol.LedgerRange; import org.apache.hedwig.protocol.PubSubProtocol.LedgerRanges; import org.apache.hedwig.protocol.PubSubProtocol.Message; import org.apache.hedwig.protocol.PubSubProtocol.MessageSeqId; import org.apache.hedwig.protocol.PubSubProtocol.RegionSpecificSeqId; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionData; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.ZooKeeper; import org.apache.zookeeper.data.Stat; import com.google.protobuf.ByteString; import com.google.protobuf.InvalidProtocolBufferException; /** * A tool to read topic messages. * * This tool : * 1) reads persistence info from zookeeper: ledger ranges * 2) reads subscription info from zookeeper: so we can know the least consumed message id * 3) uses a bk client to read messages starting from the least message id */ public class ReadTopic { final HedwigAdmin admin; final ByteString topic; long startSeqId; long leastConsumedSeqId = Long.MAX_VALUE; final boolean inConsole; static final int RC_OK = 0; static final int RC_ERROR = -1; static final int RC_NOTOPIC = -2; static final int RC_NOLEDGERS = -3; static final int RC_NOSUBSCRIBERS = -4; static final int NUM_MESSAGES_TO_PRINT = 15; List<LedgerRange> ledgers = new ArrayList<LedgerRange>(); /** * Constructor */ public ReadTopic(HedwigAdmin admin, ByteString topic, boolean inConsole) { this(admin, topic, 1, inConsole); } /** * Constructor */ public ReadTopic(HedwigAdmin admin, ByteString topic, long msgSeqId, boolean inConsole) { this.admin = admin; this.topic = topic; this.startSeqId = msgSeqId; this.inConsole = inConsole; } /** * Check whether the topic exists or not * * @return RC_OK if the topic exists; RC_NOTOPIC if not. * @throws Exception */ protected int checkTopic() throws Exception { return admin.hasTopic(topic) ? RC_OK : RC_NOTOPIC; } /** * Get the ledgers used by this topic to store messages * * @return RC_OK if the topic has messages; RC_NOLEDGERS if not. 
* @throws Exception */ protected int getTopicLedgers() throws Exception { List<LedgerRange> ranges = admin.getTopicLedgers(topic); if (null == ranges || ranges.isEmpty()) { return RC_NOLEDGERS; } ledgers.addAll(ranges); return RC_OK; } protected int getLeastSubscription() throws Exception { Map<ByteString, SubscriptionData> states = admin.getTopicSubscriptions(topic); if (states.isEmpty()) { return RC_NOSUBSCRIBERS; } for (Map.Entry<ByteString, SubscriptionData> entry : states.entrySet()) { SubscriptionData state = entry.getValue(); long localMsgId = state.getState().getMsgId().getLocalComponent(); if (localMsgId < leastConsumedSeqId) { leastConsumedSeqId = localMsgId; } } if (leastConsumedSeqId == Long.MAX_VALUE) { leastConsumedSeqId = 0; } return RC_OK; } public void readTopic() { try { int rc = _readTopic(); switch (rc) { case RC_NOTOPIC: System.err.println("No topic " + topic + " found."); break; case RC_NOLEDGERS: System.err.println("No messages have been published to topic " + topic); break; default: break; } } catch (Exception e) { System.err.println("ERROR: reading messages of topic " + topic + " failed."); e.printStackTrace(); } } protected int _readTopic() throws Exception { int rc; // check topic rc = checkTopic(); if (RC_OK != rc) { return rc; } // get topic ledgers rc = getTopicLedgers(); if (RC_OK != rc) { return rc; } // get topic subscription to find the least one rc = getLeastSubscription(); if (RC_NOSUBSCRIBERS == rc) { startSeqId = 1; } else if (RC_OK == rc) { if (leastConsumedSeqId > startSeqId) { startSeqId = leastConsumedSeqId + 1; } } else { return rc; } for (LedgerRange range : ledgers) { long endSeqId = range.getEndSeqIdIncluded().getLocalComponent(); if (endSeqId < startSeqId) { continue; } boolean toContinue = readLedger(range); startSeqId = endSeqId + 1; if (!toContinue) { break; } } return RC_OK; } /** * Read a specific ledger * * @param ledger the in-memory ledger range * @return true to continue reading, otherwise false * @throws BKException * @throws IOException * @throws InterruptedException */ protected boolean readLedger(LedgerRange ledger) throws BKException, IOException, InterruptedException { long tEndSeqId = ledger.getEndSeqIdIncluded().getLocalComponent(); if (tEndSeqId < this.startSeqId) { return true; } // Open Ledger Handle long ledgerId = ledger.getLedgerId(); System.out.println("\n>>>>> " + ledger + " <<<<<\n"); LedgerHandle lh = null; try { lh = admin.getBkHandle().openLedgerNoRecovery(ledgerId, admin.getBkDigestType(), admin.getBkPasswd()); } catch (BKException e) { System.err.println("ERROR: No ledger " + ledgerId + " found. 
It may have been garbage collected after its messages were consumed."); } if (null == lh) { return true; } long expectedEntryId = startSeqId - ledger.getStartSeqIdIncluded(); long correctedEndSeqId = tEndSeqId; try { while (startSeqId <= tEndSeqId) { correctedEndSeqId = Math.min(startSeqId + NUM_MESSAGES_TO_PRINT - 1, tEndSeqId); try { Enumeration<LedgerEntry> seq = lh.readEntries(startSeqId - ledger.getStartSeqIdIncluded(), correctedEndSeqId - ledger.getStartSeqIdIncluded()); LedgerEntry entry = null; while (seq.hasMoreElements()) { entry = seq.nextElement(); Message message; try { message = Message.parseFrom(entry.getEntryInputStream()); } catch (IOException e) { System.out.println("WARN: Unreadable message found\n"); expectedEntryId++; continue; } if (expectedEntryId != entry.getEntryId() || (message.getMsgId().getLocalComponent() - ledger.getStartSeqIdIncluded()) != expectedEntryId) { throw new IOException("ERROR: Message ids are out of order : expected entry id " + expectedEntryId + ", current entry id " + entry.getEntryId() + ", msg seq id " + message.getMsgId().getLocalComponent()); } expectedEntryId++; formatMessage(message); } startSeqId = correctedEndSeqId + 1; if (inConsole) { if (!pressKeyToContinue()) { return false; } } } catch (BKException.BKReadException be) { throw be; } } } catch (BKException bke) { if (tEndSeqId != Long.MAX_VALUE) { System.err.println("ERROR: ledger " + ledgerId + " may be corrupted, since reading messages [" + startSeqId + " ~ " + correctedEndSeqId + "] failed :"); throw bke; } } System.out.println("\n"); return true; } protected void formatMessage(Message message) { // print msg id String msgId; if (!message.hasMsgId()) { msgId = "N/A"; } else { MessageSeqId seqId = message.getMsgId(); StringBuilder idBuilder = new StringBuilder(); if (seqId.hasLocalComponent()) { idBuilder.append("LOCAL(").append(seqId.getLocalComponent()).append(")"); } else { List<RegionSpecificSeqId> remoteIds = seqId.getRemoteComponentsList(); int i = 0, numRegions = remoteIds.size(); idBuilder.append("REMOTE("); for (RegionSpecificSeqId rssid : remoteIds) { idBuilder.append(rssid.getRegion().toStringUtf8()); idBuilder.append("["); idBuilder.append(rssid.getSeqId()); idBuilder.append("]"); ++i; if (i < numRegions) { idBuilder.append(","); } } idBuilder.append(")"); } msgId = idBuilder.toString(); } System.out.println("---------- MSGID=" + msgId + " ----------"); System.out.println("MsgId: " + msgId); // print source region if (message.hasSrcRegion()) { System.out.println("SrcRegion: " + message.getSrcRegion().toStringUtf8()); } else { System.out.println("SrcRegion: N/A"); } // print message body System.out.println("Message:"); System.out.println(); if (message.hasBody()) { System.out.println(message.getBody().toStringUtf8()); } else { System.out.println("N/A"); } System.out.println(); } boolean pressKeyToContinue() throws IOException { System.out.println("Press Y to continue..."); BufferedReader stdin = new BufferedReader(new InputStreamReader(System.in)); int ch = stdin.read(); if (ch == 'y' || ch == 'Y') { return true; } return false; } } bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/data/000077500000000000000000000000001244507361200266705ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/data/MessageFormatter.java000066400000000000000000000104511244507361200330040ustar00rootroot00000000000000/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. 
See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.hedwig.data; import java.io.IOException; import java.util.List; import org.apache.bookkeeper.util.EntryFormatter; import org.apache.commons.configuration.Configuration; import org.apache.hedwig.protocol.PubSubProtocol.Message; import org.apache.hedwig.protocol.PubSubProtocol.MessageSeqId; import org.apache.hedwig.protocol.PubSubProtocol.RegionSpecificSeqId; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Format a pub sub message into a readable format. */ public class MessageFormatter extends EntryFormatter { static Logger logger = LoggerFactory.getLogger(MessageFormatter.class); static final String MESSAGE_PAYLOAD_FORMATTER_CLASS = "message_payload_formatter_class"; EntryFormatter dataFormatter = EntryFormatter.STRING_FORMATTER; @Override public void setConf(Configuration conf) { super.setConf(conf); dataFormatter = EntryFormatter.newEntryFormatter(conf, MESSAGE_PAYLOAD_FORMATTER_CLASS); } @Override public void formatEntry(java.io.InputStream input) { Message message; try { message = Message.parseFrom(input); } catch (IOException e) { System.out.println("WARN: Unreadable message found\n"); EntryFormatter.STRING_FORMATTER.formatEntry(input); return; } formatMessage(message); } @Override public void formatEntry(byte[] data) { Message message; try { message = Message.parseFrom(data); } catch (IOException e) { System.out.println("WARN: Unreadable message found\n"); EntryFormatter.STRING_FORMATTER.formatEntry(data); return; } formatMessage(message); } void formatMessage(Message message) { // print msg id String msgId; if (!message.hasMsgId()) { msgId = "N/A"; } else { MessageSeqId seqId = message.getMsgId(); StringBuilder idBuilder = new StringBuilder(); if (seqId.hasLocalComponent()) { idBuilder.append("LOCAL(").append(seqId.getLocalComponent()).append(")"); } else { List remoteIds = seqId.getRemoteComponentsList(); int i = 0, numRegions = remoteIds.size(); idBuilder.append("REMOTE("); for (RegionSpecificSeqId rssid : remoteIds) { idBuilder.append(rssid.getRegion().toStringUtf8()); idBuilder.append("["); idBuilder.append(rssid.getSeqId()); idBuilder.append("]"); ++i; if (i < numRegions) { idBuilder.append(","); } } idBuilder.append(")"); } msgId = idBuilder.toString(); } System.out.println("****** MSGID=" + msgId + " ******"); System.out.println("MessageId: " + msgId); // print source region if (message.hasSrcRegion()) { System.out.println("SrcRegion: " + message.getSrcRegion().toStringUtf8()); } else { System.out.println("SrcRegion: N/A"); } // print message body if (message.hasBody()) { System.out.println("Body:"); dataFormatter.formatEntry(message.getBody().toByteArray()); } else { System.out.println("Body: N/A"); } System.out.println(); } } 
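/*
 * A minimal usage sketch for the formatter above (hypothetical demo class; the
 * seq id, region and payload values are illustrative). MessageFormatter extends
 * EntryFormatter and can be driven directly against any protobuf-encoded Message:
 */
package org.apache.hedwig.data;

import com.google.protobuf.ByteString;

import org.apache.hedwig.protocol.PubSubProtocol.Message;
import org.apache.hedwig.protocol.PubSubProtocol.MessageSeqId;

public class MessageFormatterDemo {
    public static void main(String[] args) {
        Message msg = Message.newBuilder()
                .setMsgId(MessageSeqId.newBuilder().setLocalComponent(42).build())
                .setSrcRegion(ByteString.copyFromUtf8("region-a"))
                .setBody(ByteString.copyFromUtf8("hello hedwig"))
                .build();
        // without setConf(), the payload is printed via STRING_FORMATTER
        new MessageFormatter().formatEntry(msg.toByteArray());
    }
}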
bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/000077500000000000000000000000001244507361200272655ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/benchmark/000077500000000000000000000000001244507361200312175ustar00rootroot00000000000000AbstractBenchmark.java000066400000000000000000000065511244507361200353700ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/benchmark/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.benchmark; import java.util.concurrent.LinkedBlockingQueue; import java.util.concurrent.Semaphore; import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.atomic.AtomicLong; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.apache.bookkeeper.util.MathUtils; import org.apache.hedwig.util.ConcurrencyUtils; public abstract class AbstractBenchmark { static final Logger logger = LoggerFactory.getLogger(AbstractBenchmark.class); AtomicLong totalLatency = new AtomicLong(); LinkedBlockingQueue doneSignalQueue = new LinkedBlockingQueue(); abstract void doOps(int numOps) throws Exception; abstract void tearDown() throws Exception; protected class AbstractCallback { AtomicInteger numDone = new AtomicInteger(0); Semaphore outstanding; int numOps; boolean logging; public AbstractCallback(Semaphore outstanding, int numOps) { this.outstanding = outstanding; this.numOps = numOps; logging = Boolean.getBoolean("progress"); } public void handle(boolean success, Object ctx) { outstanding.release(); if (!success) { ConcurrencyUtils.put(doneSignalQueue, false); return; } totalLatency.addAndGet(MathUtils.now() - (Long)ctx); int numDoneInt = numDone.incrementAndGet(); if (logging && numDoneInt % 10000 == 0) { logger.info("Finished " + numDoneInt + " ops"); } if (numOps == numDoneInt) { ConcurrencyUtils.put(doneSignalQueue, true); } } } public void runPhase(String phase, int numOps) throws Exception { long startTime = MathUtils.now(); doOps(numOps); if (!doneSignalQueue.take()) { logger.error("One or more operations failed in phase: " + phase); throw new RuntimeException(); } else { logger.info("Phase: " + phase + " Avg latency : " + totalLatency.get() / numOps + ", tput = " + (numOps * 1000/ (MathUtils.now() - startTime))); } } public void run() throws Exception { int numWarmup = Integer.getInteger("nWarmup", 50000); runPhase("warmup", numWarmup); logger.info("Sleeping for 10 seconds"); Thread.sleep(10000); //reset latency totalLatency.set(0); int numOps = Integer.getInteger("nOps", 400000); runPhase("real", numOps); tearDown(); } } 
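/*
 * A minimal sketch of a concrete benchmark built on AbstractBenchmark
 * (hypothetical NoOpBenchmark class, for illustration only): every "operation"
 * completes inline, but it exercises the Semaphore throttling and the
 * AbstractCallback bookkeeping that the real benchmarks below rely on.
 */
package org.apache.hedwig.server.benchmark;

import java.util.concurrent.Semaphore;

import org.apache.bookkeeper.util.MathUtils;

public class NoOpBenchmark extends AbstractBenchmark {
    @Override
    void doOps(final int numOps) throws Exception {
        final Semaphore outstanding = new Semaphore(Integer.getInteger("nPars", 1000));
        AbstractCallback handler = new AbstractCallback(outstanding, numOps);
        for (int i = 0; i < numOps; i++) {
            outstanding.acquire();
            // a real benchmark issues an async op here and calls handle()
            // from its completion callback; ctx carries the op start time
            handler.handle(true, MathUtils.now());
        }
    }

    @Override
    void tearDown() {
        // nothing to release
    }

    public static void main(String[] args) throws Exception {
        new NoOpBenchmark().run();
    }
}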
BookieBenchmark.java000066400000000000000000000075541244507361200350410ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/benchmark/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.benchmark; import java.net.InetSocketAddress; import java.nio.ByteBuffer; import java.util.concurrent.Executors; import java.util.concurrent.Semaphore; import org.apache.bookkeeper.conf.ClientConfiguration; import org.apache.bookkeeper.client.BKException; import org.apache.bookkeeper.proto.BookieClient; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.WriteCallback; import org.apache.bookkeeper.util.MathUtils; import org.apache.bookkeeper.util.OrderedSafeExecutor; import org.jboss.netty.buffer.ChannelBuffer; import org.jboss.netty.buffer.ChannelBuffers; import org.jboss.netty.channel.socket.ClientSocketChannelFactory; import org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory; public class BookieBenchmark extends AbstractBenchmark { static final Logger logger = LoggerFactory.getLogger(BookieBenchmark.class); BookieClient bkc; InetSocketAddress addr; ClientSocketChannelFactory channelFactory; OrderedSafeExecutor executor = new OrderedSafeExecutor(1); public BookieBenchmark(String bookieHostPort) throws Exception { channelFactory = new NioClientSocketChannelFactory(Executors.newCachedThreadPool(), Executors.newCachedThreadPool()); bkc = new BookieClient(new ClientConfiguration(), channelFactory, executor); String[] hostPort = bookieHostPort.split(":"); addr = new InetSocketAddress(hostPort[0], Integer.parseInt(hostPort[1])); } @Override void doOps(final int numOps) throws Exception { int numOutstanding = Integer.getInteger("nPars",1000); final Semaphore outstanding = new Semaphore(numOutstanding); WriteCallback callback = new WriteCallback() { AbstractCallback handler = new AbstractCallback(outstanding, numOps); @Override public void writeComplete(int rc, long ledgerId, long entryId, InetSocketAddress addr, Object ctx) { handler.handle(rc == BKException.Code.OK, ctx); } }; byte[] passwd = new byte[20]; int size = Integer.getInteger("size", 1024); byte[] data = new byte[size]; for (int i=0; i map = new ConcurrentHashMap(); public static ByteString intern(ByteString in) { ByteString presentValueInMap = map.putIfAbsent(in, in); if (presentValueInMap != null) { return presentValueInMap; } return in; } } ServerConfiguration.java000066400000000000000000000465201244507361200353460ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/common/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements.
See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.common; import java.io.FileInputStream; import java.io.FileNotFoundException; import java.io.InputStream; import java.net.InetAddress; import java.net.URL; import java.net.UnknownHostException; import java.util.Arrays; import java.util.LinkedList; import java.util.List; import org.apache.commons.configuration.ConfigurationException; import org.apache.commons.lang.StringUtils; import com.google.protobuf.ByteString; import org.apache.bookkeeper.util.ReflectionUtils; import org.apache.hedwig.conf.AbstractConfiguration; import org.apache.hedwig.server.meta.MetadataManagerFactory; import org.apache.hedwig.util.HedwigSocketAddress; public class ServerConfiguration extends AbstractConfiguration { public final static String REGION = "region"; protected final static String MAX_MESSAGE_SIZE = "max_message_size"; protected final static String READAHEAD_COUNT = "readahead_count"; protected final static String READAHEAD_SIZE = "readahead_size"; protected final static String CACHE_SIZE = "cache_size"; protected final static String CACHE_ENTRY_TTL = "cache_entry_ttl"; protected final static String SCAN_BACKOFF_MSEC = "scan_backoff_ms"; protected final static String SERVER_PORT = "server_port"; protected final static String SSL_SERVER_PORT = "ssl_server_port"; protected final static String ZK_PREFIX = "zk_prefix"; protected final static String ZK_HOST = "zk_host"; protected final static String ZK_TIMEOUT = "zk_timeout"; protected final static String READAHEAD_ENABLED = "readahead_enabled"; protected final static String STANDALONE = "standalone"; protected final static String REGIONS = "regions"; protected final static String CERT_NAME = "cert_name"; protected final static String CERT_PATH = "cert_path"; protected final static String PASSWORD = "password"; protected final static String SSL_ENABLED = "ssl_enabled"; protected final static String CONSUME_INTERVAL = "consume_interval"; protected final static String RETENTION_SECS = "retention_secs"; protected final static String INTER_REGION_SSL_ENABLED = "inter_region_ssl_enabled"; protected final static String MESSAGES_CONSUMED_THREAD_RUN_INTERVAL = "messages_consumed_thread_run_interval"; protected final static String BK_ENSEMBLE_SIZE = "bk_ensemble_size"; @Deprecated protected final static String BK_QUORUM_SIZE = "bk_quorum_size"; protected final static String BK_WRITE_QUORUM_SIZE = "bk_write_quorum_size"; protected final static String BK_ACK_QUORUM_SIZE = "bk_ack_quorum_size"; protected final static String RETRY_REMOTE_SUBSCRIBE_THREAD_RUN_INTERVAL = "retry_remote_subscribe_thread_run_interval"; protected final static String DEFAULT_MESSAGE_WINDOW_SIZE = "default_message_window_size"; protected final static String NUM_READAHEAD_CACHE_THREADS = "num_readahead_cache_threads"; protected final static String MAX_ENTRIES_PER_LEDGER = "max_entries_per_ledger"; // 
manager related settings protected final static String METADATA_MANAGER_BASED_TOPIC_MANAGER_ENABLED = "metadata_manager_based_topic_manager_enabled"; protected final static String METADATA_MANAGER_FACTORY_CLASS = "metadata_manager_factory_class"; // metastore settings, only being used when METADATA_MANAGER_FACTORY_CLASS is MsMetadataManagerFactory protected final static String METASTORE_IMPL_CLASS = "metastore_impl_class"; protected final static String METASTORE_MAX_ENTRIES_PER_SCAN = "metastoreMaxEntriesPerScan"; private static ClassLoader defaultLoader; static { defaultLoader = Thread.currentThread().getContextClassLoader(); if (null == defaultLoader) { defaultLoader = ServerConfiguration.class.getClassLoader(); } } // these are the derived attributes protected ByteString myRegionByteString = null; protected HedwigSocketAddress myServerAddress = null; protected List regionList = null; // Although this method is not currently used, currently maintaining it like // this so that we can support on-the-fly changes in configuration protected void refreshDerivedAttributes() { refreshMyRegionByteString(); refreshMyServerAddress(); refreshRegionList(); } @Override public void loadConf(URL confURL) throws ConfigurationException { super.loadConf(confURL); refreshDerivedAttributes(); } public int getMaximumMessageSize() { return conf.getInt(MAX_MESSAGE_SIZE, 1258291); /* 1.2M */ } public String getMyRegion() { return conf.getString(REGION, "standalone"); } protected void refreshMyRegionByteString() { myRegionByteString = ByteString.copyFromUtf8(getMyRegion()); } protected void refreshMyServerAddress() { try { // Use the raw IP address as the hostname myServerAddress = new HedwigSocketAddress(InetAddress.getLocalHost().getHostAddress(), getServerPort(), getSSLServerPort()); } catch (UnknownHostException e) { throw new RuntimeException(e); } } // The expected format for the regions parameter is Hostname:Port:SSLPort // with spaces in between each of the regions. protected void refreshRegionList() { String regions = conf.getString(REGIONS, ""); if (regions.isEmpty()) { regionList = new LinkedList(); } else { regionList = Arrays.asList(regions.split(" ")); } } public ByteString getMyRegionByteString() { if (myRegionByteString == null) { refreshMyRegionByteString(); } return myRegionByteString; } /** * Maximum number of messages to read ahead. Default is 10. * * @return int */ public int getReadAheadCount() { return conf.getInt(READAHEAD_COUNT, 10); } /** * Maximum number of bytes to read ahead. Default is 4MB. * * @return long */ public long getReadAheadSizeBytes() { return conf.getLong(READAHEAD_SIZE, 4 * 1024 * 1024); // 4M } /** * Maximum cache size. By default is the smallest of 2G or * half the heap size. * * @return long */ public long getMaximumCacheSize() { // 2G or half of the maximum amount of memory the JVM uses return conf.getLong(CACHE_SIZE, Math.min(2 * 1024L * 1024L * 1024L, Runtime.getRuntime().maxMemory() / 2)); } /** * Cache Entry TTL. By default is 0, cache entry will not be evicted * until the cache is fullfilled or the messages are already consumed. * The TTL is only checked when trying adding a new entry into the cache. * * @return cache entry ttl. */ public long getCacheEntryTTL() { return conf.getLong(CACHE_ENTRY_TTL, 0L); } /** * After a scan of a log fails, how long before we retry (in msec) * * @return long */ public long getScanBackoffPeriodMs() { return conf.getLong(SCAN_BACKOFF_MSEC, 1000); } /** * Returns server port. 
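* Default is 4080, as configured by the {@code server_port} property.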
* * @return int */ public int getServerPort() { return conf.getInt(SERVER_PORT, 4080); } /** * Returns SSL server port. * * @return int */ public int getSSLServerPort() { return conf.getInt(SSL_SERVER_PORT, 9876); } /** * Returns ZooKeeper path prefix. * * @return string */ public String getZkPrefix() { return conf.getString(ZK_PREFIX, "/hedwig"); } public StringBuilder getZkRegionPrefix(StringBuilder sb) { return sb.append(getZkPrefix()).append("/").append(getMyRegion()); } /** * Get znode path to store manager layouts. * * @param sb * StringBuilder to store znode path to store manager layouts. * @return znode path to store manager layouts. */ public StringBuilder getZkManagersPrefix(StringBuilder sb) { return getZkRegionPrefix(sb).append("/managers"); } public StringBuilder getZkTopicsPrefix(StringBuilder sb) { return getZkRegionPrefix(sb).append("/topics"); } public StringBuilder getZkTopicPath(StringBuilder sb, ByteString topic) { return getZkTopicsPrefix(sb).append("/").append(topic.toStringUtf8()); } public StringBuilder getZkHostsPrefix(StringBuilder sb) { return getZkRegionPrefix(sb).append("/hosts"); } public HedwigSocketAddress getServerAddr() { if (myServerAddress == null) { refreshMyServerAddress(); } return myServerAddress; } /** * Return ZooKeeper list of servers. Default is localhost. * * @return String */ public String getZkHost() { List servers = conf.getList(ZK_HOST, null); if (null == servers || 0 == servers.size()) { return "localhost"; } return StringUtils.join(servers, ","); } /** * Return ZooKeeper session timeout. Default is 2s. * * @return int */ public int getZkTimeout() { return conf.getInt(ZK_TIMEOUT, 2000); } /** * Returns true if read-ahead enabled. Default is true. * * @return boolean */ public boolean getReadAheadEnabled() { return conf.getBoolean(READAHEAD_ENABLED, true) || conf.getBoolean("readhead_enabled"); // the key was misspelt in a previous version, so compensate here } /** * Returns true if standalone. Default is false. * * @return boolean */ public boolean isStandalone() { return conf.getBoolean(STANDALONE, false); } /** * Returns list of regions. * * @return List */ public List getRegions() { if (regionList == null) { refreshRegionList(); } return regionList; } /** * Returns the name of the SSL certificate if available as a resource. * * @return String */ public String getCertName() { return conf.getString(CERT_NAME, ""); } /** * This is the path to the SSL certificate if it is available as a file. * * @return String */ public String getCertPath() { return conf.getString(CERT_PATH, ""); } // This method return the SSL certificate as an InputStream based on if it // is configured to be available as a resource or as a file. If nothing is // configured correctly, then a ConfigurationException will be thrown as // we do not know how to obtain the SSL certificate stream. public InputStream getCertStream() throws FileNotFoundException, ConfigurationException { String certName = getCertName(); String certPath = getCertPath(); if (certName != null && !certName.isEmpty()) { return getClass().getResourceAsStream(certName); } else if (certPath != null && !certPath.isEmpty()) { return new FileInputStream(certPath); } else throw new ConfigurationException("SSL Certificate configuration does not have resource name or path set!"); } /** * Returns the password used for BookKeeper ledgers. Default * is the empty string. * * @return */ public String getPassword() { return conf.getString(PASSWORD, ""); } /** * Returns true if SSL is enabled. Default is false. 
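* <p>A configuration sketch (property names as defined in this class;
* the values shown are illustrative, not defaults):
* <pre>
* ssl_enabled=true
* ssl_server_port=9876
* cert_path=/path/to/hub-cert.pem
* password=changeme
* </pre>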
* * @return boolean */ public boolean isSSLEnabled() { return conf.getBoolean(SSL_ENABLED, false); } /** * Gets the number of messages consumed before persisting * information about consumed messages. A value greater than * one avoids persisting information about consumed messages * upon every consumed message. Default is 50. * * @return int */ public int getConsumeInterval() { return conf.getInt(CONSUME_INTERVAL, 50); } /** * Returns the interval to release a topic. If this * parameter is greater than zero, then schedule a * task to release an owned topic. Default is 0 (never released). * * @return int */ public int getRetentionSecs() { return conf.getInt(RETENTION_SECS, 0); } /** * True if SSL is enabled across regions. * * @return boolean */ public boolean isInterRegionSSLEnabled() { return conf.getBoolean(INTER_REGION_SSL_ENABLED, false); } /** * This parameter is used to determine how often we run the * SubscriptionManager's Messages Consumed timer task thread * (in milliseconds). * * @return int */ public int getMessagesConsumedThreadRunInterval() { return conf.getInt(MESSAGES_CONSUMED_THREAD_RUN_INTERVAL, 60000); } /** * This parameter is used to determine how often we run a thread * to retry those failed remote subscriptions in asynchronous mode * (in milliseconds). * * @return int */ public int getRetryRemoteSubscribeThreadRunInterval() { return conf.getInt(RETRY_REMOTE_SUBSCRIBE_THREAD_RUN_INTERVAL, 120000); } /** * This parameter is for setting the default maximum number of messages which * can be delivered to a subscriber without being consumed. * we pause messages delivery to a subscriber when reaching the window size * * @return int */ public int getDefaultMessageWindowSize() { return conf.getInt(DEFAULT_MESSAGE_WINDOW_SIZE, 0); } /** * This parameter is used when Bookkeeper is the persistence * store and indicates what the ensemble size is (i.e. how * many bookie servers to stripe the ledger entries across). * * @return int */ public int getBkEnsembleSize() { return conf.getInt(BK_ENSEMBLE_SIZE, 3); } /** * This parameter is used when Bookkeeper is the persistence store * and indicates what the quorum size is (i.e. how many redundant * copies of each ledger entry is written). * * @return int */ @Deprecated protected int getBkQuorumSize() { return conf.getInt(BK_QUORUM_SIZE, 2); } /** * Get the write quorum size for BookKeeper client, which is used to * indicate how many redundant copies of each ledger entry is written. * * @return write quorum size for BookKeeper client. */ public int getBkWriteQuorumSize() { if (conf.containsKey(BK_WRITE_QUORUM_SIZE)) { return conf.getInt(BK_WRITE_QUORUM_SIZE, 2); } else { return getBkQuorumSize(); } } /** * Get the ack quorum size for BookKeeper client. * * @return ack quorum size for BookKeeper client. */ public int getBkAckQuorumSize() { if (conf.containsKey(BK_ACK_QUORUM_SIZE)) { return conf.getInt(BK_ACK_QUORUM_SIZE, 2); } else { return getBkQuorumSize(); } } /** * This parameter is used when BookKeeper is the persistence storage, * and indicates when the number of entries stored in a ledger reach * the threshold, hub server will open a new ledger to write. * * @return max entries per ledger */ public long getMaxEntriesPerLedger() { return conf.getLong(MAX_ENTRIES_PER_LEDGER, 0L); } /* * Is this a valid configuration that we can run with? This code might grow * over time. 
*/ public void validate() throws ConfigurationException { if (!getZkPrefix().startsWith("/")) { throw new ConfigurationException(ZK_PREFIX + " must start with a /"); } // Validate that if Regions exist and inter-region communication is SSL // enabled, that the Regions correspond to valid HedwigSocketAddresses, // namely that SSL ports are present. if (isInterRegionSSLEnabled() && getRegions().size() > 0) { for (String hubString : getRegions()) { HedwigSocketAddress hub = new HedwigSocketAddress(hubString); if (hub.getSSLSocketAddress() == null) throw new ConfigurationException("Region defined does not have required SSL port: " + hubString); } } // Validate that the Bookkeeper ensemble size >= quorum size. if (getBkEnsembleSize() < getBkWriteQuorumSize()) { throw new ConfigurationException("BK ensemble size (" + getBkEnsembleSize() + ") is less than the write quorum size (" + getBkWriteQuorumSize() + ")"); } if (getBkWriteQuorumSize() < getBkAckQuorumSize()) { throw new ConfigurationException("BK write quorum size (" + getBkWriteQuorumSize() + ") is less than the ack quorum size (" + getBkAckQuorumSize() + ")"); } // add other checks here } /** * Get number of read ahead cache threads. * * @return number of read ahead cache threads. */ public int getNumReadAheadCacheThreads() { return conf.getInt(NUM_READAHEAD_CACHE_THREADS, Runtime.getRuntime().availableProcessors()); } /** * Whether enable metadata manager based topic manager. * * @return true if enabled metadata manager based topic manager. */ public boolean isMetadataManagerBasedTopicManagerEnabled() { return conf.getBoolean(METADATA_MANAGER_BASED_TOPIC_MANAGER_ENABLED, false); } /** * Get metadata manager factory class. * * @return manager class */ public Class getMetadataManagerFactoryClass() throws ConfigurationException { return ReflectionUtils.getClass(conf, METADATA_MANAGER_FACTORY_CLASS, null, MetadataManagerFactory.class, defaultLoader); } /** * Set metadata manager factory class name * * @param managerClsName * Manager Class Name * @return server configuration */ public ServerConfiguration setMetadataManagerFactoryName(String managerClsName) { conf.setProperty(METADATA_MANAGER_FACTORY_CLASS, managerClsName); return this; } /** * Get metastore implementation class. * * @return metastore implementation class name. */ public String getMetastoreImplClass() { return conf.getString(METASTORE_IMPL_CLASS); } /** * Get max entries per scan in metastore. * * @return max entries per scan in metastore. */ public int getMetastoreMaxEntriesPerScan() { return conf.getInt(METASTORE_MAX_ENTRIES_PER_SCAN, 50); } } TerminateJVMExceptionHandler.java000066400000000000000000000023731244507361200370300ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/common/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
* See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.common; import org.slf4j.Logger; import org.slf4j.LoggerFactory; public class TerminateJVMExceptionHandler implements Thread.UncaughtExceptionHandler { static Logger logger = LoggerFactory.getLogger(TerminateJVMExceptionHandler.class); @Override public void uncaughtException(Thread t, Throwable e) { logger.error("Uncaught exception in thread " + t.getName(), e); Runtime.getRuntime().exit(1); } } TopicOpQueuer.java000066400000000000000000000067161244507361200341170ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/common/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.common; import java.util.HashMap; import java.util.LinkedList; import java.util.Queue; import java.util.concurrent.ScheduledExecutorService; import com.google.protobuf.ByteString; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.util.Callback; public class TopicOpQueuer { /** * Map from topic to the queue of operations for that topic. 
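* <p>A usage sketch for the enclosing queuer (the scheduler and the work done
* in runInternal() are illustrative): ops pushed for the same topic run one at
* a time in FIFO order, while ops for different topics do not block each other.
* <pre>
* TopicOpQueuer queuer = new TopicOpQueuer(scheduler);
* queuer.pushAndMaybeRun(topic, queuer.new SynchronousOp(topic) {
*     protected void runInternal() {
*         // per-topic work; the next queued op for this topic is
*         // scheduled only after this one completes
*     }
* });
* </pre>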
*/ protected HashMap> topic2ops = new HashMap>(); protected final ScheduledExecutorService scheduler; public TopicOpQueuer(ScheduledExecutorService scheduler) { this.scheduler = scheduler; } public interface Op extends Runnable { } public abstract class AsynchronousOp implements Op { final public ByteString topic; final public Callback cb; final public Object ctx; public AsynchronousOp(final ByteString topic, final Callback cb, Object ctx) { this.topic = topic; this.cb = new Callback() { @Override public void operationFailed(Object ctx, PubSubException exception) { cb.operationFailed(ctx, exception); popAndRunNext(topic); } @Override public void operationFinished(Object ctx, T resultOfOperation) { cb.operationFinished(ctx, resultOfOperation); popAndRunNext(topic); } }; this.ctx = ctx; } } public abstract class SynchronousOp implements Op { final public ByteString topic; public SynchronousOp(ByteString topic) { this.topic = topic; } @Override public final void run() { runInternal(); popAndRunNext(topic); } protected abstract void runInternal(); } protected synchronized void popAndRunNext(ByteString topic) { Queue ops = topic2ops.get(topic); if (!ops.isEmpty()) ops.remove(); if (!ops.isEmpty()) scheduler.submit(ops.peek()); } public void pushAndMaybeRun(ByteString topic, Op op) { int size; synchronized (this) { Queue ops = topic2ops.get(topic); if (ops == null) { ops = new LinkedList(); topic2ops.put(topic, ops); } ops.add(op); size = ops.size(); } if (size == 1) op.run(); } public Runnable peek(ByteString topic) { return topic2ops.get(topic).peek(); } } UnexpectedError.java000066400000000000000000000021351244507361200344600ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/common/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.common; public class UnexpectedError extends Error { /** * */ private static final long serialVersionUID = 1L; public UnexpectedError(String msg) { super(msg); } public UnexpectedError(Throwable cause) { super(cause); } } bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/delivery/000077500000000000000000000000001244507361200311105ustar00rootroot00000000000000ChannelEndPoint.java000066400000000000000000000053061244507361200347110ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/delivery/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. 
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.delivery; import java.util.HashMap; import java.util.Map; import org.jboss.netty.channel.Channel; import org.jboss.netty.channel.ChannelFuture; import org.jboss.netty.channel.ChannelFutureListener; import org.apache.hedwig.protocol.PubSubProtocol.PubSubResponse; import org.apache.hedwig.server.common.UnexpectedError; public class ChannelEndPoint implements DeliveryEndPoint, ChannelFutureListener { Channel channel; public Channel getChannel() { return channel; } Map callbacks = new HashMap(); public ChannelEndPoint(Channel channel) { this.channel = channel; } public void close() { channel.close(); } public void send(PubSubResponse response, DeliveryCallback callback) { ChannelFuture future = channel.write(response); callbacks.put(future, callback); future.addListener(this); } public void operationComplete(ChannelFuture future) throws Exception { DeliveryCallback callback = callbacks.get(future); callbacks.remove(future); if (callback == null) { throw new UnexpectedError("Could not locate callback for channel future"); } if (future.isSuccess()) { callback.sendingFinished(); } else { // treat all channel errors as permanent callback.permanentErrorOnSend(); } } @Override public boolean equals(Object obj) { if (obj instanceof ChannelEndPoint) { ChannelEndPoint channelEndPoint = (ChannelEndPoint) obj; return channel.equals(channelEndPoint.channel); } else { return false; } } @Override public int hashCode() { return channel.hashCode(); } @Override public String toString() { return channel.toString(); } } DeliveryCallback.java000066400000000000000000000017571244507361200351060ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/delivery/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.delivery; public interface DeliveryCallback { public void sendingFinished(); public void transientErrorOnSend(); public void permanentErrorOnSend(); } DeliveryEndPoint.java000066400000000000000000000020411244507361200351150ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/delivery/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. 
The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.delivery; import org.apache.hedwig.protocol.PubSubProtocol.PubSubResponse; public interface DeliveryEndPoint { public void send(PubSubResponse response, DeliveryCallback callback); public void close(); } DeliveryManager.java000066400000000000000000000064601244507361200347600ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/delivery/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.delivery; import com.google.protobuf.ByteString; import org.apache.hedwig.protocol.PubSubProtocol.MessageSeqId; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionEvent; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionPreferences; import org.apache.hedwig.filter.ServerMessageFilter; import org.apache.hedwig.util.Callback; public interface DeliveryManager { public void start(); /** * Start serving a given subscription. * * @param topic * Topic Name * @param subscriberId * Subscriber Id * @param preferences * Subscription Preferences * @param seqIdToStartFrom * Message sequence id starting delivery from. * @param endPoint * End point to deliver messages to. * @param filter * Message filter used to filter messages before delivery. * @param callback * Callback instance. * @param ctx * Callback context. */ public void startServingSubscription(ByteString topic, ByteString subscriberId, SubscriptionPreferences preferences, MessageSeqId seqIdToStartFrom, DeliveryEndPoint endPoint, ServerMessageFilter filter, Callback callback, Object ctx); /** * Stop serving a given subscription. * * @param topic * Topic Name * @param subscriberId * Subscriber Id * @param event * Subscription event indicating the reason to stop the subscriber. * @param callback * Callback instance. * @param ctx * Callback context. */ public void stopServingSubscriber(ByteString topic, ByteString subscriberId, SubscriptionEvent event, Callback callback, Object ctx); /** * Tell the delivery manager where that a subscriber has consumed * * @param topic * Topic Name * @param subscriberId * Subscriber Id * @param consumedSeqId * Max consumed seq id. 
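* In the FIFODeliveryManager implementation below, this seq id only
* advances the subscriber's in-memory consume pointer and, if the
* subscriber had been throttled by its message window, wakes it up
* to resume delivery.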
*/ public void messageConsumed(ByteString topic, ByteString subscriberId, MessageSeqId consumedSeqId); /** * Stop delivery manager */ public void stop(); } FIFODeliveryManager.java000066400000000000000000001047221244507361200354240ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/delivery/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.delivery; import java.util.Comparator; import java.util.HashMap; import java.util.HashSet; import java.util.Map; import java.util.Queue; import java.util.Set; import java.util.SortedMap; import java.util.TreeMap; import java.util.concurrent.BlockingQueue; import java.util.concurrent.PriorityBlockingQueue; import java.util.concurrent.LinkedBlockingQueue; import java.util.concurrent.TimeUnit; import java.util.concurrent.locks.ReentrantReadWriteLock; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import com.google.common.annotations.VisibleForTesting; import com.google.protobuf.ByteString; import org.apache.bookkeeper.util.MathUtils; import org.apache.hedwig.client.data.TopicSubscriber; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.filter.ServerMessageFilter; import org.apache.hedwig.protocol.PubSubProtocol.Message; import org.apache.hedwig.protocol.PubSubProtocol.MessageSeqId; import org.apache.hedwig.protocol.PubSubProtocol.ProtocolVersion; import org.apache.hedwig.protocol.PubSubProtocol.PubSubResponse; import org.apache.hedwig.protocol.PubSubProtocol.StatusCode; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionEvent; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionPreferences; import org.apache.hedwig.protoextensions.PubSubResponseUtils; import org.apache.hedwig.server.common.ServerConfiguration; import org.apache.hedwig.server.common.UnexpectedError; import org.apache.hedwig.server.handlers.SubscriptionChannelManager.SubChannelDisconnectedListener; import org.apache.hedwig.server.netty.ServerStats; import org.apache.hedwig.server.persistence.CancelScanRequest; import org.apache.hedwig.server.persistence.Factory; import org.apache.hedwig.server.persistence.MapMethods; import org.apache.hedwig.server.persistence.PersistenceManager; import org.apache.hedwig.server.persistence.ReadAheadCache; import org.apache.hedwig.server.persistence.ScanCallback; import org.apache.hedwig.server.persistence.ScanRequest; import org.apache.hedwig.util.Callback; import static org.apache.hedwig.util.VarArgs.va; public class FIFODeliveryManager implements Runnable, DeliveryManager, SubChannelDisconnectedListener { protected static final Logger logger = LoggerFactory.getLogger(FIFODeliveryManager.class); private static Callback NOP_CALLBACK = new Callback() { @Override public void operationFinished(Object 
ctx, Void result) { } @Override public void operationFailed(Object ctx, PubSubException exception) { } }; protected interface DeliveryManagerRequest { public void performRequest(); } /** * the main queue that the single-threaded delivery manager works off of */ BlockingQueue requestQueue = new LinkedBlockingQueue(); /** * The queue of all subscriptions that are facing a transient error either * in scanning from the persistence manager, or in sending to the consumer */ Queue retryQueue = new PriorityBlockingQueue(32, new Comparator() { @Override public int compare(ActiveSubscriberState as1, ActiveSubscriberState as2) { long s = as1.lastScanErrorTime - as2.lastScanErrorTime; return s > 0 ? 1 : (s < 0 ? -1 : 0); } }); /** * Stores a mapping from topic to the delivery pointers on the topic. The * delivery pointers are stored in a sorted map from seq-id to the set of * subscribers at that seq-id */ Map>> perTopicDeliveryPtrs; /** * Mapping from delivery end point to the subscriber state that we are * serving at that end point. This prevents us e.g., from serving two * subscriptions to the same endpoint */ Map subscriberStates; private final ReadAheadCache cache; private final PersistenceManager persistenceMgr; private ServerConfiguration cfg; // Boolean indicating if this thread should continue running. This is used // when we want to stop the thread during a PubSubServer shutdown. protected boolean keepRunning = true; private final Thread workerThread; private Object suspensionLock = new Object(); private boolean suspended = false; public FIFODeliveryManager(PersistenceManager persistenceMgr, ServerConfiguration cfg) { this.persistenceMgr = persistenceMgr; if (persistenceMgr instanceof ReadAheadCache) { this.cache = (ReadAheadCache) persistenceMgr; } else { this.cache = null; } perTopicDeliveryPtrs = new HashMap>>(); subscriberStates = new HashMap(); workerThread = new Thread(this, "DeliveryManagerThread"); this.cfg = cfg; } @Override public void start() { workerThread.start(); } /** * Stop FIFO delivery manager from processing requests. (for testing) */ @VisibleForTesting public void suspendProcessing() { synchronized(suspensionLock) { suspended = true; } } /** * Resume FIFO delivery manager. 
(for testing) */ @VisibleForTesting public void resumeProcessing() { synchronized(suspensionLock) { suspended = false; suspensionLock.notify(); } } /** * ===================================================================== Our * usual enqueue function, stop if error because of unbounded queue, should * never happen * */ protected void enqueueWithoutFailure(DeliveryManagerRequest request) { if (!requestQueue.offer(request)) { throw new UnexpectedError("Could not enqueue object: " + request + " to delivery manager request queue."); } } /** * ==================================================================== * Public interface of the delivery manager */ /** * Tells the delivery manager to start sending out messages for a particular * subscription * * @param topic * @param subscriberId * @param seqIdToStartFrom * Message sequence-id from where delivery should be started * @param endPoint * The delivery end point to which send messages to * @param filter * Only messages passing this filter should be sent to this * subscriber * @param callback * Callback instance * @param ctx * Callback context */ @Override public void startServingSubscription(ByteString topic, ByteString subscriberId, SubscriptionPreferences preferences, MessageSeqId seqIdToStartFrom, DeliveryEndPoint endPoint, ServerMessageFilter filter, Callback callback, Object ctx) { ActiveSubscriberState subscriber = new ActiveSubscriberState(topic, subscriberId, preferences, seqIdToStartFrom.getLocalComponent() - 1, endPoint, filter, callback, ctx); enqueueWithoutFailure(subscriber); } public void stopServingSubscriber(ByteString topic, ByteString subscriberId, SubscriptionEvent event, Callback cb, Object ctx) { enqueueWithoutFailure(new StopServingSubscriber(topic, subscriberId, event, cb, ctx)); } /** * Instructs the delivery manager to backoff on the given subscriber and * retry sending after some time * * @param subscriber */ public void retryErroredSubscriberAfterDelay(ActiveSubscriberState subscriber) { subscriber.setLastScanErrorTime(MathUtils.now()); if (!retryQueue.offer(subscriber)) { throw new UnexpectedError("Could not enqueue to delivery manager retry queue"); } } public void clearRetryDelayForSubscriber(ActiveSubscriberState subscriber) { subscriber.clearLastScanErrorTime(); if (!retryQueue.offer(subscriber)) { throw new UnexpectedError("Could not enqueue to delivery manager retry queue"); } // no request in request queue now // issue a empty delivery request to not waiting for polling requests queue if (requestQueue.isEmpty()) { enqueueWithoutFailure(new DeliveryManagerRequest() { @Override public void performRequest() { // do nothing } }); } } // TODO: for now, I don't move messageConsumed request to delivery manager thread, // which is supposed to be fixed in {@link https://issues.apache.org/jira/browse/BOOKKEEPER-503} @Override public void messageConsumed(ByteString topic, ByteString subscriberId, MessageSeqId consumedSeqId) { ActiveSubscriberState subState = subscriberStates.get(new TopicSubscriber(topic, subscriberId)); if (null == subState) { return; } subState.messageConsumed(consumedSeqId.getLocalComponent()); } /** * Instructs the delivery manager to move the delivery pointer for a given * subscriber * * @param subscriber * @param prevSeqId * @param newSeqId */ public void moveDeliveryPtrForward(ActiveSubscriberState subscriber, long prevSeqId, long newSeqId) { enqueueWithoutFailure(new DeliveryPtrMove(subscriber, prevSeqId, newSeqId)); } /* * 
========================================================================== * == End of public interface, internal machinery begins. */ public void run() { while (keepRunning) { DeliveryManagerRequest request = null; try { // We use a timeout of 1 second, so that we can wake up once in // a while to check if there is something in the retry queue. request = requestQueue.poll(1, TimeUnit.SECONDS); synchronized(suspensionLock) { while (suspended) { suspensionLock.wait(); } } } catch (InterruptedException e) { Thread.currentThread().interrupt(); } // First retry any subscriptions that had failed and need a retry retryErroredSubscribers(); if (request == null) { continue; } request.performRequest(); } } /** * Stop method which will enqueue a ShutdownDeliveryManagerRequest. */ @Override public void stop() { enqueueWithoutFailure(new ShutdownDeliveryManagerRequest()); } protected void retryErroredSubscribers() { long lastInterestingFailureTime = MathUtils.now() - cfg.getScanBackoffPeriodMs(); ActiveSubscriberState subscriber; while ((subscriber = retryQueue.peek()) != null) { if (subscriber.getLastScanErrorTime() > lastInterestingFailureTime) { // Not enough time has elapsed yet, will retry later // Since the queue is fifo, no need to check later items return; } // retry now subscriber.deliverNextMessage(); retryQueue.poll(); } } protected void removeDeliveryPtr(ActiveSubscriberState subscriber, Long seqId, boolean isAbsenceOk, boolean pruneTopic) { assert seqId != null; // remove this subscriber from the delivery pointers data structure ByteString topic = subscriber.getTopic(); SortedMap> deliveryPtrs = perTopicDeliveryPtrs.get(topic); if (deliveryPtrs == null && !isAbsenceOk) { throw new UnexpectedError("No delivery pointers found while disconnecting " + "channel for topic:" + topic); } if(null == deliveryPtrs) { return; } if (!MapMethods.removeFromMultiMap(deliveryPtrs, seqId, subscriber) && !isAbsenceOk) { throw new UnexpectedError("Could not find subscriber:" + subscriber + " at the expected delivery pointer"); } if (pruneTopic && deliveryPtrs.isEmpty()) { perTopicDeliveryPtrs.remove(topic); } } protected long getMinimumSeqId(ByteString topic) { SortedMap> deliveryPtrs = perTopicDeliveryPtrs.get(topic); if (deliveryPtrs == null || deliveryPtrs.isEmpty()) { return Long.MAX_VALUE - 1; } return deliveryPtrs.firstKey(); } protected void addDeliveryPtr(ActiveSubscriberState subscriber, Long seqId) { // If this topic doesn't exist in the per-topic delivery pointers table, // create an entry for it SortedMap> deliveryPtrs = MapMethods.getAfterInsertingIfAbsent( perTopicDeliveryPtrs, subscriber.getTopic(), TreeMapLongToSetSubscriberFactory.instance); MapMethods.addToMultiMap(deliveryPtrs, seqId, subscriber, HashMapSubscriberFactory.instance); } public class ActiveSubscriberState implements ScanCallback, DeliveryCallback, DeliveryManagerRequest, CancelScanRequest { static final int UNLIMITED = 0; ByteString topic; ByteString subscriberId; long lastLocalSeqIdDelivered; boolean connected = true; ReentrantReadWriteLock connectedLock = new ReentrantReadWriteLock(); DeliveryEndPoint deliveryEndPoint; long lastScanErrorTime = -1; long localSeqIdDeliveringNow; long lastSeqIdCommunicatedExternally; long lastSeqIdConsumedUtil; boolean isThrottled = false; final int messageWindowSize; ServerMessageFilter filter; Callback cb; Object ctx; // track the outstanding scan request // so we could cancel it ScanRequest outstandingScanRequest; final static int SEQ_ID_SLACK = 10; public ActiveSubscriberState(ByteString 
topic, ByteString subscriberId, SubscriptionPreferences preferences, long lastLocalSeqIdDelivered, DeliveryEndPoint deliveryEndPoint, ServerMessageFilter filter, Callback cb, Object ctx) { this.topic = topic; this.subscriberId = subscriberId; this.lastLocalSeqIdDelivered = lastLocalSeqIdDelivered; this.lastSeqIdConsumedUtil = lastLocalSeqIdDelivered; this.deliveryEndPoint = deliveryEndPoint; this.filter = filter; if (preferences.hasMessageWindowSize()) { messageWindowSize = preferences.getMessageWindowSize(); } else { if (FIFODeliveryManager.this.cfg.getDefaultMessageWindowSize() > 0) { messageWindowSize = FIFODeliveryManager.this.cfg.getDefaultMessageWindowSize(); } else { messageWindowSize = UNLIMITED; } } this.cb = cb; this.ctx = ctx; } public void setNotConnected(SubscriptionEvent event) { this.connectedLock.writeLock().lock(); try { // already closed; nothing to do. if (!connected) { return; } this.connected = false; // enqueue a cancel into the ReadAhead queue to cancel the outstanding scan request; // if the outstanding scan request calls back before the cancel op executes, // there is nothing left for it to cancel. if (null != cache && null != outstandingScanRequest) { cache.cancelScanRequest(topic, this); } } finally { this.connectedLock.writeLock().unlock(); } if (null != event && (SubscriptionEvent.TOPIC_MOVED == event || SubscriptionEvent.SUBSCRIPTION_FORCED_CLOSED == event)) { // we should not close the channel now after enabling multiplexing PubSubResponse response = PubSubResponseUtils.getResponseForSubscriptionEvent( topic, subscriberId, event ); deliveryEndPoint.send(response, new DeliveryCallback() { @Override public void sendingFinished() { // do nothing now } @Override public void transientErrorOnSend() { // do nothing now } @Override public void permanentErrorOnSend() { // if channel is broken, close the channel deliveryEndPoint.close(); } }); } // uninitialize filter this.filter.uninitialize(); } public ByteString getTopic() { return topic; } public synchronized long getLastScanErrorTime() { return lastScanErrorTime; } public synchronized void setLastScanErrorTime(long lastScanErrorTime) { this.lastScanErrorTime = lastScanErrorTime; } /** * Clear the last scan error time so the subscriber can be retried immediately.
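* Invoked via {@link FIFODeliveryManager#clearRetryDelayForSubscriber}, e.g.
* when a throttled subscriber is woken up to resume delivery.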
*/ protected synchronized void clearLastScanErrorTime() { this.lastScanErrorTime = -1; } protected boolean isConnected() { connectedLock.readLock().lock(); try { return connected; } finally { connectedLock.readLock().unlock(); } } protected synchronized void messageConsumed(long newSeqIdConsumed) { if (newSeqIdConsumed <= lastSeqIdConsumedUtil) { return; } if (logger.isDebugEnabled()) { logger.debug("Subscriber ({}) moved consumed ptr from {} to {}.", va(this, lastSeqIdConsumedUtil, newSeqIdConsumed)); } lastSeqIdConsumedUtil = newSeqIdConsumed; // after updated seq id check whether it still exceed msg limitation if (msgLimitExceeded()) { return; } if (isThrottled) { isThrottled = false; logger.info("Try to wake up subscriber ({}) to deliver messages again : last delivered {}, last consumed {}.", va(this, lastLocalSeqIdDelivered, lastSeqIdConsumedUtil)); enqueueWithoutFailure(new DeliveryManagerRequest() { @Override public void performRequest() { // enqueue clearRetryDelayForSubscriber(ActiveSubscriberState.this); } }); } } protected boolean msgLimitExceeded() { if (messageWindowSize == UNLIMITED) { return false; } if (lastLocalSeqIdDelivered - lastSeqIdConsumedUtil >= messageWindowSize) { return true; } return false; } public void deliverNextMessage() { connectedLock.readLock().lock(); try { doDeliverNextMessage(); } finally { connectedLock.readLock().unlock(); } } private void doDeliverNextMessage() { if (!connected) { return; } synchronized (this) { // check whether we have delivered enough messages without receiving their consumes if (msgLimitExceeded()) { logger.info("Subscriber ({}) is throttled : last delivered {}, last consumed {}.", va(this, lastLocalSeqIdDelivered, lastSeqIdConsumedUtil)); isThrottled = true; // do nothing, since the delivery process would be throttled. // After message consumed, it would be added back to retry queue. return; } localSeqIdDeliveringNow = persistenceMgr.getSeqIdAfterSkipping(topic, lastLocalSeqIdDelivered, 1); outstandingScanRequest = new ScanRequest(topic, localSeqIdDeliveringNow, /* callback= */this, /* ctx= */null); } persistenceMgr.scanSingleMessage(outstandingScanRequest); } /** * =============================================================== * {@link CancelScanRequest} methods * * This method runs ins same threads with ScanCallback. When it runs, * it checked whether it is outstanding scan request. if there is one, * cancel it. */ @Override public ScanRequest getScanRequest() { // no race between cancel request and scan callback // the only race is between stopServing and deliverNextMessage // deliverNextMessage would be executed in netty callback which is in netty thread // stopServing is run in delivery thread. if stopServing runs before deliverNextMessage // deliverNextMessage would have chance to put a stub in ReadAheadCache // then we don't have any chance to cancel it. // use connectedLock to avoid such race. 
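// Illustrative race the lock prevents: without it, the delivery thread's
// setNotConnected() (write lock) could slip in between doDeliverNextMessage()
// creating outstandingScanRequest and scanSingleMessage() submitting it
// (both done under the read lock), leaving a scan stub in the ReadAheadCache
// with no one left to cancel it.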
return outstandingScanRequest; } private boolean checkConnected() { connectedLock.readLock().lock(); try { // a scanned message means the outstanding request has been executed outstandingScanRequest = null; return connected; } finally { connectedLock.readLock().unlock(); } } /** * =============================================================== * {@link ScanCallback} methods */ public void messageScanned(Object ctx, Message message) { if (!checkConnected()) { return; } if (!filter.testMessage(message)) { // for filtered out messages, we don't deliver the message to the client, so we would not // receive its consume request which moves the lastSeqIdConsumedUtil pointer. // we move the lastSeqIdConsumedUtil here for filtered out messages, which // avoids a subscriber being throttled due to the message gap introduced by filtering. // // it is OK to move lastSeqIdConsumedUtil here, since this pointer is the subscriber's // delivery state used to throttle delivery. changing lastSeqIdConsumedUtil would // not affect the subscriber's consume pointer in zookeeper, which is managed in the // subscription manager. // // and marking the message consumed before calling sendingFinished avoids the subscriber // being throttled first and only released from the throttled state later. messageConsumed(message.getMsgId().getLocalComponent()); sendingFinished(); return; } /** * The method below will invoke our sendingFinished() method when * done */ PubSubResponse response = PubSubResponse.newBuilder() .setProtocolVersion(ProtocolVersion.VERSION_ONE) .setStatusCode(StatusCode.SUCCESS).setTxnId(0) .setMessage(message).setTopic(topic) .setSubscriberId(subscriberId).build(); deliveryEndPoint.send(response, // // callback = this); } @Override public void scanFailed(Object ctx, Exception exception) { if (!checkConnected()) { return; } // wait for some time and then retry retryErroredSubscriberAfterDelay(this); } @Override public void scanFinished(Object ctx, ReasonForFinish reason) { checkConnected(); } /** * =============================================================== * {@link DeliveryCallback} methods */ @Override public void sendingFinished() { if (!isConnected()) { return; } synchronized (this) { lastLocalSeqIdDelivered = localSeqIdDeliveringNow; if (lastLocalSeqIdDelivered > lastSeqIdCommunicatedExternally + SEQ_ID_SLACK) { // Note: The order of the next 2 statements is important. We should // submit a request to change our delivery pointer only *after* we // have actually changed it. Otherwise, there is a race condition // with removal of this channel, w.r.t. maintaining the deliveryPtrs // tree map. long prevId = lastSeqIdCommunicatedExternally; lastSeqIdCommunicatedExternally = lastLocalSeqIdDelivered; moveDeliveryPtrForward(this, prevId, lastLocalSeqIdDelivered); } } // increment delivered message count ServerStats.getInstance().incrementMessagesDelivered(); deliverNextMessage(); } public synchronized long getLastSeqIdCommunicatedExternally() { return lastSeqIdCommunicatedExternally; } @Override public void permanentErrorOnSend() { // the underlying channel is broken; the channel will // be closed in UmbrellaHandler when the exception happens. 
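// A standalone sketch (not Hedwig code) of the SEQ_ID_SLACK batching performed by
// sendingFinished() above: the shared delivery-pointer map is only updated once the
// local pointer has advanced more than SEQ_ID_SLACK past the last advertised value,
// and the local state is updated *before* the move is published, matching the
// ordering note in the code. The slack value below is an assumption.
class PointerSlackSketch {
    static final long SEQ_ID_SLACK = 10; // hypothetical slack; Hedwig's constant may differ
    long delivered = 0;
    long communicated = 0;

    void onSent() {
        delivered++;
        if (delivered > communicated + SEQ_ID_SLACK) {
            long prev = communicated;
            communicated = delivered;     // change the local pointer first...
            publishMove(prev, delivered); // ...then submit the move request
        }
    }

    void publishMove(long from, long to) {
        System.out.println("move delivery ptr " + from + " -> " + to);
    }

    public static void main(String[] args) {
        PointerSlackSketch s = new PointerSlackSketch();
        for (int i = 0; i < 25; i++) {
            s.onSent(); // prints moves at seq ids 11 and 22 only
        }
    }
}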
// so we don't need to close the channel again stopServingSubscriber(topic, subscriberId, null, NOP_CALLBACK, null); } @Override public void transientErrorOnSend() { retryErroredSubscriberAfterDelay(this); } /** * =============================================================== * {@link DeliveryManagerRequest} methods */ @Override public void performRequest() { // Put this subscriber in the channel to subscriber mapping ActiveSubscriberState prevSubscriber = subscriberStates.put(new TopicSubscriber(topic, subscriberId), this); // after putting the active subscriber in the subscriber states mapping, // trigger the callback to tell it to start delivering messages. // the subscribe response should go out before the first delivered message. cb.operationFinished(ctx, (Void)null); if (prevSubscriber != null) { // we are already in the delivery thread, so we don't need to enqueue a stop request: // just stop it now, since stop is a non-blocking operation, // and it also cleans the old state of the active subscriber immediately. SubscriptionEvent se; if (deliveryEndPoint.equals(prevSubscriber.deliveryEndPoint)) { logger.debug("Subscriber {} replaced a duplicated subscriber {} at same delivery point {}.", va(this, prevSubscriber, deliveryEndPoint)); se = null; } else { logger.debug("Subscriber {} from delivery point {} forcibly closed delivery point {}.", va(this, deliveryEndPoint, prevSubscriber.deliveryEndPoint)); se = SubscriptionEvent.SUBSCRIPTION_FORCED_CLOSED; } doStopServingSubscriber(prevSubscriber, se); } synchronized (this) { lastSeqIdCommunicatedExternally = lastLocalSeqIdDelivered; addDeliveryPtr(this, lastLocalSeqIdDelivered); } deliverNextMessage(); } @Override public String toString() { StringBuilder sb = new StringBuilder(); sb.append("Topic: "); sb.append(topic.toStringUtf8()); sb.append(", Subscriber: "); sb.append(subscriberId.toStringUtf8()); sb.append(", DeliveryPtr: "); sb.append(lastLocalSeqIdDelivered); return sb.toString(); } } protected class StopServingSubscriber implements DeliveryManagerRequest { TopicSubscriber ts; SubscriptionEvent event; final Callback<Void> cb; final Object ctx; public StopServingSubscriber(ByteString topic, ByteString subscriberId, SubscriptionEvent event, Callback<Void> callback, Object ctx) { this.ts = new TopicSubscriber(topic, subscriberId); this.event = event; this.cb = callback; this.ctx = ctx; } @Override public void performRequest() { ActiveSubscriberState subscriber = subscriberStates.remove(ts); if (null != subscriber) { doStopServingSubscriber(subscriber, event); } cb.operationFinished(ctx, null); } } /** * Stop serving a subscriber. This method should be called in a * {@link DeliveryManagerRequest}. * * @param subscriber * Active Subscriber to stop * @param event * Subscription Event for the stop reason */ private void doStopServingSubscriber(ActiveSubscriberState subscriber, SubscriptionEvent event) { // This will automatically stop delivery, and disconnect the channel subscriber.setNotConnected(event); // if the subscriber has moved on, a move request for its delivery // pointer must be pending in the request queue. Note that the // subscriber first changes its delivery pointer and then submits a // request to move, so this works. 
removeDeliveryPtr(subscriber, subscriber.getLastSeqIdCommunicatedExternally(), // // isAbsenceOk= true, // pruneTopic= true); } protected class DeliveryPtrMove implements DeliveryManagerRequest { ActiveSubscriberState subscriber; Long oldSeqId; Long newSeqId; public DeliveryPtrMove(ActiveSubscriberState subscriber, Long oldSeqId, Long newSeqId) { this.subscriber = subscriber; this.oldSeqId = oldSeqId; this.newSeqId = newSeqId; } @Override public void performRequest() { ByteString topic = subscriber.getTopic(); long prevMinSeqId = getMinimumSeqId(topic); if (subscriber.isConnected()) { removeDeliveryPtr(subscriber, oldSeqId, // // isAbsenceOk= false, // pruneTopic= false); addDeliveryPtr(subscriber, newSeqId); } else { removeDeliveryPtr(subscriber, oldSeqId, // // isAbsenceOk= true, // pruneTopic= true); } long nowMinSeqId = getMinimumSeqId(topic); if (nowMinSeqId > prevMinSeqId) { persistenceMgr.deliveredUntil(topic, nowMinSeqId); } } } protected class ShutdownDeliveryManagerRequest implements DeliveryManagerRequest { // This is a simple type of Request we will enqueue when the // PubSubServer is shut down and we want to stop the DeliveryManager // thread. @Override public void performRequest() { keepRunning = false; } } /** * ==================================================================== * * Dumb factories for our map methods */ protected static class TreeMapLongToSetSubscriberFactory implements Factory<SortedMap<Long, Set<ActiveSubscriberState>>> { static TreeMapLongToSetSubscriberFactory instance = new TreeMapLongToSetSubscriberFactory(); @Override public SortedMap<Long, Set<ActiveSubscriberState>> newInstance() { return new TreeMap<Long, Set<ActiveSubscriberState>>(); } } protected static class HashMapSubscriberFactory implements Factory<Set<ActiveSubscriberState>> { static HashMapSubscriberFactory instance = new HashMapSubscriberFactory(); @Override public Set<ActiveSubscriberState> newInstance() { return new HashSet<ActiveSubscriberState>(); } } @Override public void onSubChannelDisconnected(TopicSubscriber topicSubscriber) { stopServingSubscriber(topicSubscriber.getTopic(), topicSubscriber.getSubscriberId(), null, NOP_CALLBACK, null); } } bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/handlers/000077500000000000000000000000001244507361200310655ustar00rootroot00000000000000BaseHandler.java000066400000000000000000000054671244507361200340350ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/handlers/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
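// DeliveryPtrMove above recomputes, per topic, the minimum delivery pointer across
// all subscribers, and tells the persistence manager it may trim the topic only when
// that minimum advances. A standalone sketch of that bookkeeping, using the same
// TreeMap-of-sets shape the factories above create (subscribers simplified to
// strings; not Hedwig's API):
import java.util.HashSet;
import java.util.Set;
import java.util.SortedMap;
import java.util.TreeMap;

class MinPtrSketch {
    final SortedMap<Long, Set<String>> deliveryPtrs = new TreeMap<Long, Set<String>>();

    long min() { return deliveryPtrs.isEmpty() ? Long.MAX_VALUE : deliveryPtrs.firstKey(); }

    void move(String subscriber, long oldSeq, long newSeq) {
        long before = min();
        Set<String> old = deliveryPtrs.get(oldSeq);
        if (old != null && old.remove(subscriber) && old.isEmpty()) {
            deliveryPtrs.remove(oldSeq); // prune empty buckets, like removeDeliveryPtr
        }
        Set<String> cur = deliveryPtrs.get(newSeq);
        if (cur == null) { deliveryPtrs.put(newSeq, cur = new HashSet<String>()); }
        cur.add(subscriber);
        long after = min();
        if (after > before) {
            // corresponds to persistenceMgr.deliveredUntil(topic, after)
            System.out.println("deliveredUntil(" + after + ")");
        }
    }

    public static void main(String[] args) {
        MinPtrSketch m = new MinPtrSketch();
        m.move("subA", -1, 5); // map was empty, min() was MAX_VALUE: nothing printed
        m.move("subB", -1, 3); // min drops to 3: nothing printed
        m.move("subB", 3, 9);  // min advances 3 -> 5: prints deliveredUntil(5)
    }
}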
*/ package org.apache.hedwig.server.handlers; import org.jboss.netty.channel.Channel; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.exceptions.PubSubException.ServerNotResponsibleForTopicException; import org.apache.hedwig.protocol.PubSubProtocol.PubSubRequest; import org.apache.hedwig.protoextensions.PubSubResponseUtils; import org.apache.hedwig.server.common.ServerConfiguration; import org.apache.hedwig.server.netty.ServerStats; import org.apache.hedwig.server.topics.TopicManager; import org.apache.hedwig.util.Callback; import org.apache.hedwig.util.HedwigSocketAddress; public abstract class BaseHandler implements Handler { protected TopicManager topicMgr; protected ServerConfiguration cfg; protected BaseHandler(TopicManager tm, ServerConfiguration cfg) { this.topicMgr = tm; this.cfg = cfg; } public void handleRequest(final PubSubRequest request, final Channel channel) { topicMgr.getOwner(request.getTopic(), request.getShouldClaim(), new Callback<HedwigSocketAddress>() { @Override public void operationFailed(Object ctx, PubSubException exception) { channel.write(PubSubResponseUtils.getResponseForException(exception, request.getTxnId())); ServerStats.getInstance().getOpStats(request.getType()).incrementFailedOps(); } @Override public void operationFinished(Object ctx, HedwigSocketAddress owner) { if (!owner.equals(cfg.getServerAddr())) { channel.write(PubSubResponseUtils.getResponseForException( new ServerNotResponsibleForTopicException(owner.toString()), request.getTxnId())); ServerStats.getInstance().incrementRequestsRedirect(); return; } handleRequestAtOwner(request, channel); } }, null); } public abstract void handleRequestAtOwner(PubSubRequest request, Channel channel); } ChannelDisconnectListener.java000066400000000000000000000020761244507361200367460ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/handlers/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.handlers; import org.jboss.netty.channel.Channel; public interface ChannelDisconnectListener { /** * Act on a particular channel being disconnected * @param channel */ public void channelDisconnected(Channel channel); } CloseSubscriptionHandler.java000066400000000000000000000116401244507361200366230ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/handlers/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. 
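// BaseHandler above funnels every request through an ownership check: resolve the
// topic owner, redirect the client when this hub is not the owner, and only then
// dispatch to the concrete handler. A standalone sketch of that routing decision
// (addresses reduced to strings; not Hedwig's API):
class OwnerRoutingSketch {
    final String selfAddr;

    OwnerRoutingSketch(String selfAddr) { this.selfAddr = selfAddr; }

    String route(String owner) {
        if (!owner.equals(selfAddr)) {
            // mirrors writing ServerNotResponsibleForTopicException(owner) back to the client
            return "REDIRECT to " + owner;
        }
        // mirrors handleRequestAtOwner(request, channel)
        return "HANDLE locally";
    }

    public static void main(String[] args) {
        OwnerRoutingSketch r = new OwnerRoutingSketch("hub1:4080");
        System.out.println(r.route("hub2:4080")); // REDIRECT to hub2:4080
        System.out.println(r.route("hub1:4080")); // HANDLE locally
    }
}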
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.handlers; import org.jboss.netty.channel.Channel; import org.jboss.netty.channel.ChannelFutureListener; import com.google.protobuf.ByteString; import org.apache.hedwig.client.data.TopicSubscriber; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.protocol.PubSubProtocol.CloseSubscriptionRequest; import org.apache.hedwig.protocol.PubSubProtocol.OperationType; import org.apache.hedwig.protocol.PubSubProtocol.PubSubRequest; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionEvent; import org.apache.hedwig.protoextensions.PubSubResponseUtils; import org.apache.hedwig.server.common.ServerConfiguration; import org.apache.hedwig.server.delivery.DeliveryManager; import org.apache.hedwig.server.netty.ServerStats; import org.apache.hedwig.server.netty.ServerStats.OpStats; import org.apache.hedwig.server.netty.UmbrellaHandler; import org.apache.hedwig.server.subscriptions.SubscriptionManager; import org.apache.hedwig.server.topics.TopicManager; import org.apache.hedwig.util.Callback; public class CloseSubscriptionHandler extends BaseHandler { SubscriptionManager subMgr; DeliveryManager deliveryMgr; SubscriptionChannelManager subChannelMgr; // op stats final OpStats closesubStats; public CloseSubscriptionHandler(ServerConfiguration cfg, TopicManager tm, SubscriptionManager subMgr, DeliveryManager deliveryMgr, SubscriptionChannelManager subChannelMgr) { super(tm, cfg); this.subMgr = subMgr; this.deliveryMgr = deliveryMgr; this.subChannelMgr = subChannelMgr; closesubStats = ServerStats.getInstance().getOpStats(OperationType.CLOSESUBSCRIPTION); } @Override public void handleRequestAtOwner(final PubSubRequest request, final Channel channel) { if (!request.hasCloseSubscriptionRequest()) { UmbrellaHandler.sendErrorResponseToMalformedRequest(channel, request.getTxnId(), "Missing closesubscription request data"); closesubStats.incrementFailedOps(); return; } final CloseSubscriptionRequest closesubRequest = request.getCloseSubscriptionRequest(); final ByteString topic = request.getTopic(); final ByteString subscriberId = closesubRequest.getSubscriberId(); final long requestTime = System.currentTimeMillis(); subMgr.closeSubscription(topic, subscriberId, new Callback<Void>() { @Override public void operationFinished(Object ctx, Void result) { // we should not close the channel in the delivery manager, // since the client waits for the response to the closeSubscription request; // the client side will close the channel deliveryMgr.stopServingSubscriber(topic, subscriberId, null, new Callback<Void>() { @Override public void operationFailed(Object ctx, PubSubException exception) { channel.write(PubSubResponseUtils.getResponseForException(exception, request.getTxnId())); closesubStats.incrementFailedOps(); } @Override public void operationFinished(Object ctx, Void resultOfOperation) { // remove the topic subscription from subscription channels subChannelMgr.remove(new TopicSubscriber(topic, subscriberId), channel); channel.write(PubSubResponseUtils.getSuccessResponse(request.getTxnId())); closesubStats.updateLatency(System.currentTimeMillis() - requestTime); } }, null); } 
@Override public void operationFailed(Object ctx, PubSubException exception) { channel.write(PubSubResponseUtils.getResponseForException(exception, request.getTxnId())); closesubStats.incrementFailedOps(); } }, null); } } ConsumeHandler.java000066400000000000000000000055461244507361200345700ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/handlers/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.handlers; import org.jboss.netty.channel.Channel; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.protocol.PubSubProtocol.ConsumeRequest; import org.apache.hedwig.protocol.PubSubProtocol.OperationType; import org.apache.hedwig.protocol.PubSubProtocol.PubSubRequest; import org.apache.hedwig.server.common.ServerConfiguration; import org.apache.hedwig.server.netty.ServerStats; import org.apache.hedwig.server.netty.UmbrellaHandler; import org.apache.hedwig.server.netty.ServerStats.OpStats; import org.apache.hedwig.server.subscriptions.SubscriptionManager; import org.apache.hedwig.server.topics.TopicManager; import org.apache.hedwig.util.Callback; public class ConsumeHandler extends BaseHandler { SubscriptionManager sm; Callback<Void> noopCallback = new NoopCallback<Void>(); final OpStats consumeStats = ServerStats.getInstance().getOpStats(OperationType.CONSUME); class NoopCallback<T> implements Callback<T> { @Override public void operationFailed(Object ctx, PubSubException exception) { consumeStats.incrementFailedOps(); } @Override public void operationFinished(Object ctx, T resultOfOperation) { // we don't collect consume process time consumeStats.updateLatency(0); } } @Override public void handleRequestAtOwner(PubSubRequest request, Channel channel) { if (!request.hasConsumeRequest()) { UmbrellaHandler.sendErrorResponseToMalformedRequest(channel, request.getTxnId(), "Missing consume request data"); consumeStats.incrementFailedOps(); return; } ConsumeRequest consumeRequest = request.getConsumeRequest(); sm.setConsumeSeqIdForSubscriber(request.getTopic(), consumeRequest.getSubscriberId(), consumeRequest.getMsgId(), noopCallback, null); } public ConsumeHandler(TopicManager tm, SubscriptionManager sm, ServerConfiguration cfg) { super(tm, cfg); this.sm = sm; } } bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/handlers/Handler.java000066400000000000000000000025711244507361200333120ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. 
The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.handlers; import org.jboss.netty.channel.Channel; import org.apache.hedwig.protocol.PubSubProtocol.PubSubRequest; public interface Handler { /** * Handle a request synchronously or asynchronously. After handling the * request, the appropriate response should be written on the given channel * * @param request * The request to handle * * @param channel * The channel on which to write the response */ public void handleRequest(final PubSubRequest request, final Channel channel); } NettyHandlerBean.java000066400000000000000000000027451244507361200350500ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/handlers/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.handlers; import org.apache.hedwig.server.handlers.SubscriptionChannelManager; import org.apache.hedwig.server.jmx.HedwigMBeanInfo; public class NettyHandlerBean implements NettyHandlerMXBean, HedwigMBeanInfo { SubscriptionChannelManager subChannelMgr; public NettyHandlerBean(SubscriptionChannelManager subChannelMgr) { this.subChannelMgr = subChannelMgr; } @Override public String getName() { return "NettyHandlers"; } @Override public boolean isHidden() { return false; } @Override public int getNumSubscriptionChannels() { return subChannelMgr.getNumSubscriptionChannels(); } } NettyHandlerMXBean.java000066400000000000000000000020111244507361200352770ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/handlers/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
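// NettyHandlerBean above exposes the subscription-channel count through JMX. A
// standalone sketch of the standard-MBean plumbing involved: an interface whose
// name ends in "MBean", an implementation, and registration with the platform
// MBean server (all names below are illustrative, not Hedwig's):
import java.lang.management.ManagementFactory;
import javax.management.ObjectName;

public class JmxSketch {
    public interface CounterMBean {
        int getCount();
    }

    public static class Counter implements CounterMBean {
        @Override public int getCount() { return 42; }
    }

    public static void main(String[] args) throws Exception {
        ObjectName name = new ObjectName("org.example:type=Counter"); // hypothetical domain
        ManagementFactory.getPlatformMBeanServer().registerMBean(new Counter(), name);
        // the attribute is now readable via jconsole, or programmatically:
        System.out.println(ManagementFactory.getPlatformMBeanServer()
                .getAttribute(name, "Count")); // 42
    }
}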
* See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.handlers; /** * Netty Handler MBean */ public interface NettyHandlerMXBean { /** * @return number of subscription channels */ public int getNumSubscriptionChannels(); } PublishHandler.java000066400000000000000000000103761244507361200345640ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/handlers/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.handlers; import org.apache.hedwig.protocol.PubSubProtocol; import org.jboss.netty.channel.Channel; import org.apache.bookkeeper.util.MathUtils; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.protocol.PubSubProtocol.Message; import org.apache.hedwig.protocol.PubSubProtocol.OperationType; import org.apache.hedwig.protocol.PubSubProtocol.PubSubRequest; import org.apache.hedwig.protoextensions.PubSubResponseUtils; import org.apache.hedwig.server.common.ServerConfiguration; import org.apache.hedwig.server.netty.ServerStats; import org.apache.hedwig.server.netty.ServerStats.OpStats; import org.apache.hedwig.server.netty.UmbrellaHandler; import org.apache.hedwig.server.persistence.PersistRequest; import org.apache.hedwig.server.persistence.PersistenceManager; import org.apache.hedwig.server.topics.TopicManager; import org.apache.hedwig.util.Callback; public class PublishHandler extends BaseHandler { private PersistenceManager persistenceMgr; private final OpStats pubStats; public PublishHandler(TopicManager topicMgr, PersistenceManager persistenceMgr, ServerConfiguration cfg) { super(topicMgr, cfg); this.persistenceMgr = persistenceMgr; this.pubStats = ServerStats.getInstance().getOpStats(OperationType.PUBLISH); } @Override public void handleRequestAtOwner(final PubSubRequest request, final Channel channel) { if (!request.hasPublishRequest()) { UmbrellaHandler.sendErrorResponseToMalformedRequest(channel, request.getTxnId(), "Missing publish request data"); pubStats.incrementFailedOps(); return; } Message msgToSerialize = Message.newBuilder(request.getPublishRequest().getMsg()).setSrcRegion( cfg.getMyRegionByteString()).build(); final long requestTime = MathUtils.now(); PersistRequest persistRequest = new PersistRequest(request.getTopic(), msgToSerialize, new Callback() { @Override public void operationFailed(Object ctx, PubSubException exception) { channel.write(PubSubResponseUtils.getResponseForException(exception, request.getTxnId())); pubStats.incrementFailedOps(); } @Override public void operationFinished(Object ctx, PubSubProtocol.MessageSeqId resultOfOperation) { channel.write(getSuccessResponse(request.getTxnId(), resultOfOperation)); pubStats.updateLatency(MathUtils.now() 
- requestTime); } }, null); persistenceMgr.persistMessage(persistRequest); } private static PubSubProtocol.PubSubResponse getSuccessResponse(long txnId, PubSubProtocol.MessageSeqId publishedMessageSeqId) { if (null == publishedMessageSeqId) { return PubSubResponseUtils.getSuccessResponse(txnId); } PubSubProtocol.PublishResponse publishResponse = PubSubProtocol.PublishResponse.newBuilder().setPublishedMsgId(publishedMessageSeqId).build(); PubSubProtocol.ResponseBody responseBody = PubSubProtocol.ResponseBody.newBuilder().setPublishResponse(publishResponse).build(); return PubSubProtocol.PubSubResponse.newBuilder(). setProtocolVersion(PubSubResponseUtils.serverVersion). setStatusCode(PubSubProtocol.StatusCode.SUCCESS).setTxnId(txnId). setResponseBody(responseBody).build(); } } SubscribeHandler.java000066400000000000000000000276351244507361200351050ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/handlers/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.handlers; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.jboss.netty.channel.Channel; import org.jboss.netty.channel.ChannelFuture; import org.jboss.netty.channel.ChannelFutureListener; import com.google.protobuf.ByteString; import org.apache.bookkeeper.util.MathUtils; import org.apache.bookkeeper.util.ReflectionUtils; import org.apache.hedwig.client.data.TopicSubscriber; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.exceptions.PubSubException.ServerNotResponsibleForTopicException; import org.apache.hedwig.filter.PipelineFilter; import org.apache.hedwig.filter.ServerMessageFilter; import org.apache.hedwig.protocol.PubSubProtocol.MessageSeqId; import org.apache.hedwig.protocol.PubSubProtocol.OperationType; import org.apache.hedwig.protocol.PubSubProtocol.PubSubRequest; import org.apache.hedwig.protocol.PubSubProtocol.ResponseBody; import org.apache.hedwig.protocol.PubSubProtocol.SubscribeRequest; import org.apache.hedwig.protocol.PubSubProtocol.SubscribeResponse; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionData; import org.apache.hedwig.protoextensions.PubSubResponseUtils; import org.apache.hedwig.protoextensions.SubscriptionStateUtils; import org.apache.hedwig.server.common.ServerConfiguration; import org.apache.hedwig.server.delivery.ChannelEndPoint; import org.apache.hedwig.server.delivery.DeliveryManager; import org.apache.hedwig.server.netty.ServerStats; import org.apache.hedwig.server.netty.ServerStats.OpStats; import org.apache.hedwig.server.netty.UmbrellaHandler; import org.apache.hedwig.server.persistence.PersistenceManager; import org.apache.hedwig.server.subscriptions.SubscriptionManager; import 
org.apache.hedwig.server.subscriptions.AllToAllTopologyFilter; import org.apache.hedwig.server.topics.TopicManager; import org.apache.hedwig.util.Callback; import static org.apache.hedwig.util.VarArgs.va; public class SubscribeHandler extends BaseHandler { static Logger logger = LoggerFactory.getLogger(SubscribeHandler.class); private final DeliveryManager deliveryMgr; private final PersistenceManager persistenceMgr; private final SubscriptionManager subMgr; private final SubscriptionChannelManager subChannelMgr; // op stats private final OpStats subStats; public SubscribeHandler(ServerConfiguration cfg, TopicManager topicMgr, DeliveryManager deliveryManager, PersistenceManager persistenceMgr, SubscriptionManager subMgr, SubscriptionChannelManager subChannelMgr) { super(topicMgr, cfg); this.deliveryMgr = deliveryManager; this.persistenceMgr = persistenceMgr; this.subMgr = subMgr; this.subChannelMgr = subChannelMgr; subStats = ServerStats.getInstance().getOpStats(OperationType.SUBSCRIBE); } @Override public void handleRequestAtOwner(final PubSubRequest request, final Channel channel) { if (!request.hasSubscribeRequest()) { UmbrellaHandler.sendErrorResponseToMalformedRequest(channel, request.getTxnId(), "Missing subscribe request data"); subStats.incrementFailedOps(); return; } final ByteString topic = request.getTopic(); MessageSeqId seqId; try { seqId = persistenceMgr.getCurrentSeqIdForTopic(topic); } catch (ServerNotResponsibleForTopicException e) { channel.write(PubSubResponseUtils.getResponseForException(e, request.getTxnId())).addListener( ChannelFutureListener.CLOSE); logger.error("Error getting current seq id for topic " + topic.toStringUtf8() + " when processing subscribe request (txnid:" + request.getTxnId() + ") :", e); subStats.incrementFailedOps(); ServerStats.getInstance().incrementRequestsRedirect(); return; } final SubscribeRequest subRequest = request.getSubscribeRequest(); final ByteString subscriberId = subRequest.getSubscriberId(); MessageSeqId lastSeqIdPublished = MessageSeqId.newBuilder(seqId).setLocalComponent(seqId.getLocalComponent()).build(); final long requestTime = MathUtils.now(); subMgr.serveSubscribeRequest(topic, subRequest, lastSeqIdPublished, new Callback() { @Override public void operationFailed(Object ctx, PubSubException exception) { channel.write(PubSubResponseUtils.getResponseForException(exception, request.getTxnId())).addListener( ChannelFutureListener.CLOSE); logger.error("Error serving subscribe request (" + request.getTxnId() + ") for (topic: " + topic.toStringUtf8() + " , subscriber: " + subscriberId.toStringUtf8() + ")", exception); subStats.incrementFailedOps(); } @Override public void operationFinished(Object ctx, final SubscriptionData subData) { TopicSubscriber topicSub = new TopicSubscriber(topic, subscriberId); synchronized (channel) { if (!channel.isConnected()) { // channel got disconnected while we were processing the // subscribe request, // nothing much we can do in this case subStats.incrementFailedOps(); return; } } // initialize the message filter PipelineFilter filter = new PipelineFilter(); try { // the filter pipeline should be // 1) AllToAllTopologyFilter to filter cross-region messages filter.addLast(new AllToAllTopologyFilter()); // 2) User-Customized MessageFilter if (subData.hasPreferences() && subData.getPreferences().hasMessageFilter()) { String messageFilterName = subData.getPreferences().getMessageFilter(); filter.addLast(ReflectionUtils.newInstance(messageFilterName, ServerMessageFilter.class)); } // initialize the 
filter filter.initialize(cfg.getConf()); filter.setSubscriptionPreferences(topic, subscriberId, subData.getPreferences()); } catch (RuntimeException re) { String errMsg = "RuntimeException caught when instantiating message filter for (topic:" + topic.toStringUtf8() + ", subscriber:" + subscriberId.toStringUtf8() + ")." + "It might be introduced by programming error in message filter."; logger.error(errMsg, re); PubSubException pse = new PubSubException.InvalidMessageFilterException(errMsg, re); subStats.incrementFailedOps(); // we should not close the subscription channel, just response error // client decide to close it or not. channel.write(PubSubResponseUtils.getResponseForException(pse, request.getTxnId())); return; } catch (Throwable t) { String errMsg = "Failed to instantiate message filter for (topic:" + topic.toStringUtf8() + ", subscriber:" + subscriberId.toStringUtf8() + ")."; logger.error(errMsg, t); PubSubException pse = new PubSubException.InvalidMessageFilterException(errMsg, t); subStats.incrementFailedOps(); channel.write(PubSubResponseUtils.getResponseForException(pse, request.getTxnId())) .addListener(ChannelFutureListener.CLOSE); return; } boolean forceAttach = false; if (subRequest.hasForceAttach()) { forceAttach = subRequest.getForceAttach(); } // Try to store the subscription channel for the topic subscriber Channel oldChannel = subChannelMgr.put(topicSub, channel, forceAttach); if (null != oldChannel) { PubSubException pse = new PubSubException.TopicBusyException( "Subscriber " + subscriberId.toStringUtf8() + " for topic " + topic.toStringUtf8() + " is already being served on a different channel " + oldChannel + "."); subStats.incrementFailedOps(); channel.write(PubSubResponseUtils.getResponseForException(pse, request.getTxnId())) .addListener(ChannelFutureListener.CLOSE); return; } // want to start 1 ahead of the consume ptr MessageSeqId lastConsumedSeqId = subData.getState().getMsgId(); MessageSeqId seqIdToStartFrom = MessageSeqId.newBuilder(lastConsumedSeqId).setLocalComponent( lastConsumedSeqId.getLocalComponent() + 1).build(); deliveryMgr.startServingSubscription(topic, subscriberId, subData.getPreferences(), seqIdToStartFrom, new ChannelEndPoint(channel), filter, new Callback() { @Override public void operationFinished(Object ctx, Void result) { // First write success and then tell the delivery manager, // otherwise the first message might go out before the response // to the subscribe SubscribeResponse.Builder subRespBuilder = SubscribeResponse.newBuilder() .setPreferences(subData.getPreferences()); ResponseBody respBody = ResponseBody.newBuilder() .setSubscribeResponse(subRespBuilder).build(); channel.write(PubSubResponseUtils.getSuccessResponse(request.getTxnId(), respBody)); logger.info("Subscribe request (" + request.getTxnId() + ") for (topic:" + topic.toStringUtf8() + ", subscriber:" + subscriberId.toStringUtf8() + ") from channel " + channel.getRemoteAddress() + " succeed - its subscription data is " + SubscriptionStateUtils.toString(subData)); subStats.updateLatency(MathUtils.now() - requestTime); } @Override public void operationFailed(Object ctx, PubSubException exception) { // would not happened } }, null); } }, null); } } SubscriptionChannelManager.java000066400000000000000000000213731244507361200371270ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/handlers/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. 
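// The subscribe path above composes ServerMessageFilters into a pipeline: the
// cross-region topology filter first, then an optional user filter loaded by class
// name. A standalone sketch of the pipeline semantics (interface simplified;
// Hedwig's PipelineFilter has more lifecycle methods):
import java.util.ArrayList;
import java.util.List;

class FilterPipelineSketch {
    interface MsgFilter { boolean test(String msg); }

    static class Pipeline implements MsgFilter {
        final List<MsgFilter> filters = new ArrayList<MsgFilter>();
        Pipeline addLast(MsgFilter f) { filters.add(f); return this; }
        @Override
        public boolean test(String msg) {
            for (MsgFilter f : filters) {
                if (!f.test(msg)) { return false; } // first rejecting stage wins
            }
            return true; // delivered only if every stage accepts
        }
    }

    public static void main(String[] args) {
        Pipeline p = new Pipeline();
        p.addLast(new MsgFilter() { // stand-in for AllToAllTopologyFilter
            @Override public boolean test(String msg) { return !msg.startsWith("remote:"); }
        });
        p.addLast(new MsgFilter() { // stand-in for a user-supplied message filter
            @Override public boolean test(String msg) { return msg.contains("interesting"); }
        });
        System.out.println(p.test("local: interesting event"));  // true
        System.out.println(p.test("remote: interesting event")); // false
    }
}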
See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.handlers; import java.util.ArrayList; import java.util.HashSet; import java.util.List; import java.util.Set; import java.util.concurrent.ConcurrentHashMap; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.jboss.netty.channel.Channel; import org.jboss.netty.channel.ChannelFuture; import org.jboss.netty.channel.ChannelFutureListener; import org.apache.hedwig.client.data.TopicSubscriber; import org.apache.hedwig.protocol.PubSubProtocol.PubSubResponse; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionEvent; import org.apache.hedwig.protoextensions.PubSubResponseUtils; import org.apache.hedwig.util.Callback; import static org.apache.hedwig.util.VarArgs.va; public class SubscriptionChannelManager implements ChannelDisconnectListener { static Logger logger = LoggerFactory.getLogger(SubscriptionChannelManager.class); static class CloseSubscriptionListener implements ChannelFutureListener { final TopicSubscriber ts; CloseSubscriptionListener(TopicSubscriber topicSubscriber) { this.ts = topicSubscriber; } @Override public void operationComplete(ChannelFuture future) throws Exception { if (!future.isSuccess()) { logger.warn("Failed to write response to close old subscription {}.", ts); } else { logger.debug("Close old subscription {} succeeded.", ts); } } }; final List<SubChannelDisconnectedListener> listeners; public interface SubChannelDisconnectedListener { /** * Act on a particular topicSubscriber being disconnected * @param topicSubscriber */ public void onSubChannelDisconnected(TopicSubscriber topicSubscriber); } final ConcurrentHashMap<TopicSubscriber, Channel> sub2Channel; final ConcurrentHashMap<Channel, Set<TopicSubscriber>> channel2sub; public SubscriptionChannelManager() { sub2Channel = new ConcurrentHashMap<TopicSubscriber, Channel>(); channel2sub = new ConcurrentHashMap<Channel, Set<TopicSubscriber>>(); listeners = new ArrayList<SubChannelDisconnectedListener>(); } public void addSubChannelDisconnectedListener(SubChannelDisconnectedListener listener) { if (null != listener) { listeners.add(listener); } } @Override public void channelDisconnected(Channel channel) { // Evils of synchronized programming: there is a race between a channel // getting disconnected, and us adding it to the maps when a subscribe // succeeds Set<TopicSubscriber> topicSubs; synchronized (channel) { topicSubs = channel2sub.remove(channel); } if (topicSubs != null) { for (TopicSubscriber topicSub : topicSubs) { logger.info("Subscription channel {} for {} is disconnected.", va(channel.getRemoteAddress(), topicSub)); // remove the entry only if it is currently mapped to the given channel. sub2Channel.remove(topicSub, channel); for (SubChannelDisconnectedListener listener : listeners) { listener.onSubChannelDisconnected(topicSub); } } } } public int getNumSubscriptionChannels() { return channel2sub.size(); } public int getNumSubscriptions() { return sub2Channel.size(); } /** * Put topicSub on Channel channel. 
* * @param topicSub * Topic Subscription * @param channel * Netty channel * @param forceAttach * Whether to forcibly attach, closing any existing subscription channel * @return null on success, otherwise the pre-existing channel. */ public Channel put(TopicSubscriber topicSub, Channel channel, boolean forceAttach) { // race with channel getting disconnected while we are adding it // to the 2 maps synchronized (channel) { Channel oldChannel = sub2Channel.putIfAbsent(topicSub, channel); // if a subscribe request is sent from the same channel, // we treat it as a successful action. if (null != oldChannel && !oldChannel.equals(channel)) { boolean subSuccess = false; if (forceAttach) { // it is safe to close the old subscription here since the new subscription // from the other channel has succeeded. synchronized (oldChannel) { Set<TopicSubscriber> oldTopicSubs = channel2sub.get(oldChannel); if (null != oldTopicSubs) { if (!oldTopicSubs.remove(topicSub)) { logger.warn("Failed to remove old subscription ({}) because it isn't on channel ({}).", va(topicSub, oldChannel)); } else if (oldTopicSubs.isEmpty()) { channel2sub.remove(oldChannel); } } } PubSubResponse resp = PubSubResponseUtils.getResponseForSubscriptionEvent( topicSub.getTopic(), topicSub.getSubscriberId(), SubscriptionEvent.SUBSCRIPTION_FORCED_CLOSED ); oldChannel.write(resp).addListener(new CloseSubscriptionListener(topicSub)); logger.info("Subscribe request for ({}) from channel ({}) closes old subscription on channel ({}).", va(topicSub, channel, oldChannel)); // try to replace the oldChannel. // if the replace fails, it might be because the channelDisconnected callback // has already removed the old channel. if (!sub2Channel.replace(topicSub, oldChannel, channel)) { // try to add it now. // if the add fails, it means someone else has obtained the channel oldChannel = sub2Channel.putIfAbsent(topicSub, channel); if (null == oldChannel) { subSuccess = true; } } else { subSuccess = true; } } if (!subSuccess) { logger.error("Error serving subscribe request for ({}) from ({}) since it is already served on ({}).", va(topicSub, channel, oldChannel)); return oldChannel; } } // channel2sub is just a cache, so we can add to it // without synchronization Set<TopicSubscriber> topicSubs = channel2sub.get(channel); if (null == topicSubs) { topicSubs = new HashSet<TopicSubscriber>(); channel2sub.put(channel, topicSubs); } topicSubs.add(topicSub); return null; } } /** * Remove topicSub from Channel channel * * @param topicSub * Topic Subscription * @param channel * Netty channel */ public void remove(TopicSubscriber topicSub, Channel channel) { synchronized (channel) { Set<TopicSubscriber> topicSubs = channel2sub.get(channel); if (null != topicSubs) { if (!topicSubs.remove(topicSub)) { logger.warn("Failed to remove subscription ({}) because it isn't on channel ({}).", va(topicSub, channel)); } else if (topicSubs.isEmpty()) { channel2sub.remove(channel); } } if (!sub2Channel.remove(topicSub, channel)) { logger.warn("Failed to remove channel ({}) because it isn't ({})'s channel.", va(channel, topicSub)); } } } } UnsubscribeHandler.java000066400000000000000000000115661244507361200354400ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/handlers/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. 
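// put() above relies on ConcurrentHashMap's atomic putIfAbsent/replace to swap a
// subscription from an old channel to a new one without locking the whole map. A
// standalone sketch of that race-safe swap (types simplified to strings; not
// Hedwig's API):
import java.util.concurrent.ConcurrentHashMap;

class ChannelSwapSketch {
    final ConcurrentHashMap<String, String> sub2Channel = new ConcurrentHashMap<String, String>();

    // returns null on success, or the channel that still owns the subscription
    String attach(String sub, String channel) {
        String old = sub2Channel.putIfAbsent(sub, channel);
        if (old == null || old.equals(channel)) {
            return null;                        // free, or already owned by this channel
        }
        if (sub2Channel.replace(sub, old, channel)) {
            return null;                        // atomically took over from 'old'
        }
        // 'old' vanished concurrently (e.g. its disconnect handler removed it); retry once
        return sub2Channel.putIfAbsent(sub, channel);
    }

    public static void main(String[] args) {
        ChannelSwapSketch s = new ChannelSwapSketch();
        System.out.println(s.attach("topic/subA", "ch1"));   // null: first attach wins
        System.out.println(s.attach("topic/subA", "ch2"));   // null: force-took over from ch1
        System.out.println(s.sub2Channel.get("topic/subA")); // ch2
    }
}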
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.handlers; import org.jboss.netty.channel.Channel; import com.google.protobuf.ByteString; import org.apache.bookkeeper.util.MathUtils; import org.apache.hedwig.client.data.TopicSubscriber; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.protocol.PubSubProtocol.OperationType; import org.apache.hedwig.protocol.PubSubProtocol.PubSubRequest; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionEvent; import org.apache.hedwig.protocol.PubSubProtocol.UnsubscribeRequest; import org.apache.hedwig.protoextensions.PubSubResponseUtils; import org.apache.hedwig.server.common.ServerConfiguration; import org.apache.hedwig.server.delivery.DeliveryManager; import org.apache.hedwig.server.netty.ServerStats; import org.apache.hedwig.server.netty.ServerStats.OpStats; import org.apache.hedwig.server.netty.UmbrellaHandler; import org.apache.hedwig.server.subscriptions.SubscriptionManager; import org.apache.hedwig.server.topics.TopicManager; import org.apache.hedwig.util.Callback; import static org.apache.hedwig.util.VarArgs.va; public class UnsubscribeHandler extends BaseHandler { SubscriptionManager subMgr; DeliveryManager deliveryMgr; SubscriptionChannelManager subChannelMgr; // op stats final OpStats unsubStats; public UnsubscribeHandler(ServerConfiguration cfg, TopicManager tm, SubscriptionManager subMgr, DeliveryManager deliveryMgr, SubscriptionChannelManager subChannelMgr) { super(tm, cfg); this.subMgr = subMgr; this.deliveryMgr = deliveryMgr; this.subChannelMgr = subChannelMgr; unsubStats = ServerStats.getInstance().getOpStats(OperationType.UNSUBSCRIBE); } @Override public void handleRequestAtOwner(final PubSubRequest request, final Channel channel) { if (!request.hasUnsubscribeRequest()) { UmbrellaHandler.sendErrorResponseToMalformedRequest(channel, request.getTxnId(), "Missing unsubscribe request data"); unsubStats.incrementFailedOps(); return; } final UnsubscribeRequest unsubRequest = request.getUnsubscribeRequest(); final ByteString topic = request.getTopic(); final ByteString subscriberId = unsubRequest.getSubscriberId(); final long requestTime = MathUtils.now(); subMgr.unsubscribe(topic, subscriberId, new Callback<Void>() { @Override public void operationFailed(Object ctx, PubSubException exception) { channel.write(PubSubResponseUtils.getResponseForException(exception, request.getTxnId())); unsubStats.incrementFailedOps(); } @Override public void operationFinished(Object ctx, Void resultOfOperation) { // we should not close the channel in the delivery manager, // since the client waits for the response to the unsubscribe request; // the client side will close the channel deliveryMgr.stopServingSubscriber(topic, subscriberId, null, new Callback<Void>() { @Override public void operationFailed(Object ctx, PubSubException exception) { channel.write(PubSubResponseUtils.getResponseForException(exception, request.getTxnId())); unsubStats.incrementFailedOps(); } @Override public void operationFinished(Object ctx, Void resultOfOperation) { // remove the topic subscription from subscription channels subChannelMgr.remove(new TopicSubscriber(topic, subscriberId), 
channel); channel.write(PubSubResponseUtils.getSuccessResponse(request.getTxnId())); unsubStats.updateLatency(System.currentTimeMillis() - requestTime); } }, ctx); } }, null); } } bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/jmx/000077500000000000000000000000001244507361200300635ustar00rootroot00000000000000HedwigJMXService.java000066400000000000000000000022451244507361200337610ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/jmx/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.jmx; /** * An implementor of this interface is basically responsible for JMX beans. */ public interface HedwigJMXService { /** * register jmx * * @param parent * Parent JMX Bean */ public void registerJMX(HedwigMBeanInfo parent); /** * unregister jmx */ public void unregisterJMX(); } HedwigMBeanInfo.java000066400000000000000000000017361244507361200336040ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/jmx/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. 
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.jmx; import javax.management.MalformedObjectNameException; import javax.management.ObjectName; import org.apache.bookkeeper.jmx.BKMBeanRegistry; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * This class provides a unified interface for registering/unregistering of * Hedwig MBeans with the platform MBean server. */ public class HedwigMBeanRegistry extends BKMBeanRegistry { static final String SERVICE = "org.apache.HedwigServer"; static HedwigMBeanRegistry instance = new HedwigMBeanRegistry(); public static HedwigMBeanRegistry getInstance(){ return instance; } @Override protected String getDomainName() { return SERVICE; } } bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/meta/000077500000000000000000000000001244507361200302135ustar00rootroot00000000000000FactoryLayout.java000066400000000000000000000126341244507361200336120ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/metapackage org.apache.hedwig.server.meta; /** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import java.io.BufferedReader; import java.io.IOException; import java.io.StringReader; import org.apache.zookeeper.CreateMode; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.ZooDefs.Ids; import org.apache.zookeeper.ZooKeeper; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import com.google.protobuf.TextFormat; import com.google.protobuf.InvalidProtocolBufferException; import org.apache.hedwig.protocol.PubSubProtocol.ManagerMeta; import org.apache.hedwig.server.common.ServerConfiguration; import org.apache.hedwig.zookeeper.ZkUtils; /** * This class encapsulates metadata manager layout information * that is persistently stored in zookeeper. * It provides parsing and serialization methods of such information. * */ public class FactoryLayout { static final Logger logger = LoggerFactory.getLogger(FactoryLayout.class); // metadata manager name public static final String NAME = "METADATA"; // Znode name to store layout information public static final String LAYOUT_ZNODE = "LAYOUT"; public static final String LSEP = "\n"; private ManagerMeta managerMeta; /** * Construct metadata manager factory layout. * * @param meta * Meta describes what kind of factory used. 
*/ public FactoryLayout(ManagerMeta meta) { this.managerMeta = meta; } public static String getFactoryLayoutPath(StringBuilder sb, ServerConfiguration cfg) { return cfg.getZkManagersPrefix(sb).append("/").append(NAME) .append("/").append(LAYOUT_ZNODE).toString(); } public ManagerMeta getManagerMeta() { return managerMeta; } /** * Store the factory layout into zookeeper * * @param zk * ZooKeeper Handle * @param cfg * Server Configuration Object * @throws KeeperException * @throws IOException * @throws InterruptedException */ public void store(ZooKeeper zk, ServerConfiguration cfg) throws KeeperException, IOException, InterruptedException { String factoryLayoutPath = getFactoryLayoutPath(new StringBuilder(), cfg); byte[] layoutData = TextFormat.printToString(managerMeta).getBytes(); ZkUtils.createFullPathOptimistic(zk, factoryLayoutPath, layoutData, Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT); } @Override public int hashCode() { return managerMeta.hashCode(); } @Override public boolean equals(Object o) { if (null == o || !(o instanceof FactoryLayout)) { return false; } FactoryLayout other = (FactoryLayout)o; return managerMeta.equals(other.managerMeta); } @Override public String toString() { return TextFormat.printToString(managerMeta); } /** * Read factory layout from zookeeper * * @param zk * ZooKeeper Client * @param cfg * Server configuration object * @return Factory layout, or null if none set in zookeeper */ public static FactoryLayout readLayout(final ZooKeeper zk, final ServerConfiguration cfg) throws IOException, KeeperException { String factoryLayoutPath = getFactoryLayoutPath(new StringBuilder(), cfg); byte[] layoutData; try { layoutData = zk.getData(factoryLayoutPath, false, null); } catch (KeeperException.NoNodeException nne) { return null; } catch (InterruptedException ie) { throw new IOException(ie); } ManagerMeta meta; try { BufferedReader reader = new BufferedReader( new StringReader(new String(layoutData))); ManagerMeta.Builder metaBuilder = ManagerMeta.newBuilder(); TextFormat.merge(reader, metaBuilder); meta = metaBuilder.build(); } catch (InvalidProtocolBufferException ipbe) { throw new IOException("Corrupted factory layout : ", ipbe); } return new FactoryLayout(meta); } /** * Remove the factory layout from ZooKeeper. * * @param zk * ZooKeeper instance * @param cfg * Server configuration object * @throws KeeperException * @throws InterruptedException */ public static void deleteLayout(ZooKeeper zk, ServerConfiguration cfg) throws KeeperException, InterruptedException { String factoryLayoutPath = getFactoryLayoutPath(new StringBuilder(), cfg); zk.delete(factoryLayoutPath, -1); } } MetadataManagerFactory.java000066400000000000000000000175341244507361200353540ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/meta/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
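// FactoryLayout above persists ManagerMeta to the LAYOUT znode as protobuf text
// format and parses it back with TextFormat.merge. A standalone round-trip sketch;
// the setter names are inferred from the getters used in this file, and the impl
// class name is hypothetical, so treat both as assumptions:
import com.google.protobuf.TextFormat;
import org.apache.hedwig.protocol.PubSubProtocol.ManagerMeta;

class LayoutRoundTripSketch {
    public static void main(String[] args) throws Exception {
        ManagerMeta meta = ManagerMeta.newBuilder()
            .setManagerImpl("org.apache.hedwig.server.meta.ZkMetadataManagerFactory") // hypothetical impl name
            .setManagerVersion(1)
            .build();
        String stored = TextFormat.printToString(meta); // what store() writes to the LAYOUT znode
        ManagerMeta.Builder parsed = ManagerMeta.newBuilder();
        TextFormat.merge(stored, parsed);               // what readLayout() does with the znode data
        System.out.println(parsed.build().equals(meta)); // true: lossless text round trip
    }
}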
* See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.meta; import java.io.IOException; import java.util.Iterator; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.apache.bookkeeper.util.ReflectionUtils; import org.apache.hedwig.protocol.PubSubProtocol.ManagerMeta; import org.apache.hedwig.server.common.ServerConfiguration; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.ZooKeeper; import com.google.protobuf.ByteString; /** * Metadata Manager used to manage metadata used by hedwig. */ public abstract class MetadataManagerFactory { static final Logger LOG = LoggerFactory.getLogger(MetadataManagerFactory.class); /** * Return current factory version. * * @return current version used by factory. */ public abstract int getCurrentVersion(); /** * Initialize the metadata manager factory with given * configuration and version. * * @param cfg * Server configuration object * @param zk * ZooKeeper handler * @param version * Manager version * @return metadata manager factory * @throws IOException when fail to initialize the manager. */ protected abstract MetadataManagerFactory initialize( ServerConfiguration cfg, ZooKeeper zk, int version) throws IOException; /** * Uninitialize the factory. * * @throws IOException when fail to shutdown the factory. */ public abstract void shutdown() throws IOException; /** * Iterate over the topics list. * Used by HedwigConsole to list available topics. * * @return iterator of the topics list. * @throws IOException */ public abstract Iterator getTopics() throws IOException; /** * Create topic persistence manager. * * @return topic persistence manager */ public abstract TopicPersistenceManager newTopicPersistenceManager(); /** * Create subscription data manager. * * @return subscription data manager. */ public abstract SubscriptionDataManager newSubscriptionDataManager(); /** * Create topic ownership manager. * * @return topic ownership manager. */ public abstract TopicOwnershipManager newTopicOwnershipManager(); /** * Format the metadata for Hedwig. * * @param cfg * Configuration instance * @param zk * ZooKeeper instance */ public abstract void format(ServerConfiguration cfg, ZooKeeper zk) throws IOException; /** * Create new Metadata Manager Factory. * * @param conf * Configuration Object. * @param zk * ZooKeeper Client Handle, talk to zk to know which manager factory is used. * @return new manager factory. 
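 * <p>A hedged usage sketch (a configured {@code conf} and a live
 * {@code zk} session are assumed to be set up elsewhere):
 * <pre>
 *   MetadataManagerFactory mmf =
 *       MetadataManagerFactory.newMetadataManagerFactory(conf, zk);
 *   TopicPersistenceManager tpm = mmf.newTopicPersistenceManager();
 *   SubscriptionDataManager sdm = mmf.newSubscriptionDataManager();
 *   // ... use the managers, then release factory resources:
 *   mmf.shutdown();
 * </pre>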
* @throws IOException */ public static MetadataManagerFactory newMetadataManagerFactory( final ServerConfiguration conf, final ZooKeeper zk) throws IOException, KeeperException, InterruptedException { Class factoryClass; try { factoryClass = conf.getMetadataManagerFactoryClass(); } catch (Exception e) { throw new IOException("Failed to get metadata manager factory class from configuration : ", e); } // check that the configured manager is // compatible with the existing layout FactoryLayout layout = FactoryLayout.readLayout(zk, conf); if (layout == null) { // no existing layout return createMetadataManagerFactory(conf, zk, factoryClass); } LOG.debug("read meta layout {}", layout); if (factoryClass != null && !layout.getManagerMeta().getManagerImpl().equals(factoryClass.getName())) { throw new IOException("Configured metadata manager factory " + factoryClass.getName() + " does not match existing factory " + layout.getManagerMeta().getManagerImpl()); } if (factoryClass == null) { // no factory specified in configuration String factoryClsName = layout.getManagerMeta().getManagerImpl(); try { Class theCls = Class.forName(factoryClsName); if (!MetadataManagerFactory.class.isAssignableFrom(theCls)) { throw new IOException("Wrong metadata manager factory " + factoryClsName); } factoryClass = theCls.asSubclass(MetadataManagerFactory.class); } catch (ClassNotFoundException cnfe) { throw new IOException("No class found to instantiate metadata manager factory " + factoryClsName); } } // instantiate the metadata manager factory MetadataManagerFactory managerFactory; try { managerFactory = ReflectionUtils.newInstance(factoryClass); } catch (Throwable t) { throw new IOException("Failed to instantiate metadata manager factory : " + factoryClass, t); } return managerFactory.initialize(conf, zk, layout.getManagerMeta().getManagerVersion()); } /** * Create metadata manager factory and write factory layout to ZooKeeper. * * @param cfg * Server Configuration object. * @param zk * ZooKeeper instance. * @param factoryClass * Metadata Manager Factory Class. * @return metadata manager factory instance. 
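 *         If another hub stores a layout first, the existing layout is read
 *         back and must equal ours; otherwise an IOException is thrown
 *         (see the NodeExistsException handling below).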
* @throws IOException * @throws KeeperException * @throws InterruptedException */ public static MetadataManagerFactory createMetadataManagerFactory( ServerConfiguration cfg, ZooKeeper zk, Class factoryClass) throws IOException, KeeperException, InterruptedException { // use default manager if no one provided if (factoryClass == null) { factoryClass = ZkMetadataManagerFactory.class; } MetadataManagerFactory managerFactory; try { managerFactory = ReflectionUtils.newInstance(factoryClass); } catch (Throwable t) { throw new IOException("Failed to instantiate metadata manager factory : " + factoryClass, t); } ManagerMeta managerMeta = ManagerMeta.newBuilder() .setManagerImpl(factoryClass.getName()) .setManagerVersion(managerFactory.getCurrentVersion()) .build(); FactoryLayout layout = new FactoryLayout(managerMeta); try { layout.store(zk, cfg); } catch (KeeperException.NodeExistsException nee) { FactoryLayout layout2 = FactoryLayout.readLayout(zk, cfg); if (!layout2.equals(layout)) { throw new IOException("Contention writing layout to zookeeper, " + "other layout " + layout2 + " is incompatible with our " + "layout " + layout); } } return managerFactory.initialize(cfg, zk, layout.getManagerMeta().getManagerVersion()); } } MsMetadataManagerFactory.java000066400000000000000000001177571244507361200356600ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/meta/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License.
*/ package org.apache.hedwig.server.meta; import java.io.IOException; import java.io.UnsupportedEncodingException; import java.util.Iterator; import java.util.Map; import java.util.concurrent.ConcurrentHashMap; import com.google.protobuf.ByteString; import com.google.protobuf.TextFormat; import com.google.protobuf.TextFormat.ParseException; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.apache.bookkeeper.metastore.MetaStore; import org.apache.bookkeeper.metastore.MetastoreCallback; import org.apache.bookkeeper.metastore.MetastoreCursor; import org.apache.bookkeeper.metastore.MetastoreCursor.ReadEntriesCallback; import org.apache.bookkeeper.metastore.MetastoreException; import org.apache.bookkeeper.metastore.MetastoreFactory; import org.apache.bookkeeper.metastore.MetastoreScannableTable; import org.apache.bookkeeper.metastore.MetastoreScannableTable.Order; import org.apache.bookkeeper.metastore.MetastoreTable; import org.apache.bookkeeper.metastore.MetastoreUtils; import static org.apache.bookkeeper.metastore.MetastoreTable.*; import org.apache.bookkeeper.metastore.MetastoreTableItem; import org.apache.bookkeeper.metastore.MSException; import org.apache.bookkeeper.metastore.Value; import org.apache.bookkeeper.versioning.Version; import org.apache.bookkeeper.versioning.Versioned; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.protocol.PubSubProtocol.LedgerRanges; import org.apache.hedwig.protocol.PubSubProtocol.StatusCode; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionData; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionPreferences; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionState; import org.apache.hedwig.protoextensions.SubscriptionStateUtils; import org.apache.hedwig.server.common.ServerConfiguration; import org.apache.hedwig.server.topics.HubInfo; import org.apache.hedwig.util.Callback; import org.apache.zookeeper.ZooKeeper; /** * MetadataManagerFactory for plug-in metadata storage. */ public class MsMetadataManagerFactory extends MetadataManagerFactory { protected final static Logger logger = LoggerFactory.getLogger(MsMetadataManagerFactory.class); static final String UTF8 = "UTF-8"; static final int CUR_VERSION = 1; static final String OWNER_TABLE_NAME = "owner"; static final String PERSIST_TABLE_NAME = "persist"; static final String SUB_TABLE_NAME = "sub"; static class SyncResult { T value; int rc; boolean finished = false; public synchronized void complete(int rc, T value) { this.rc = rc; this.value = value; finished = true; notify(); } public synchronized void block() throws InterruptedException { while (!finished) { wait(); } } public int getReturnCode() { return rc; } public T getValue() { return value; } } MetaStore metastore; MetastoreTable ownerTable; MetastoreTable persistTable; MetastoreScannableTable subTable; ServerConfiguration cfg; @Override public MetadataManagerFactory initialize(ServerConfiguration cfg, ZooKeeper zk, int version) throws IOException { if (CUR_VERSION != version) { throw new IOException("Incompatible MsMetadataManagerFactory version " + version + " found, expected version " + CUR_VERSION); } this.cfg = cfg; try { metastore = MetastoreFactory.createMetaStore(cfg.getMetastoreImplClass()); // TODO: need to store metastore class and version in some place. 
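// Hand the plug-in metastore the raw server configuration; note that, per
// the TODO above, the version passed in is the metastore's own current
// version rather than one recovered from stored metadata.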
metastore.init(cfg.getConf(), metastore.getVersion()); } catch (Exception e) { throw new IOException("Load metastore failed : ", e); } try { ownerTable = metastore.createTable(OWNER_TABLE_NAME); if (ownerTable == null) { throw new IOException("create owner table failed"); } persistTable = metastore.createTable(PERSIST_TABLE_NAME); if (persistTable == null) { throw new IOException("create persistence table failed"); } subTable = metastore.createScannableTable(SUB_TABLE_NAME); if (subTable == null) { throw new IOException("create subscription table failed"); } } catch (MetastoreException me) { throw new IOException("Failed to create tables : ", me); } return this; } @Override public int getCurrentVersion() { return CUR_VERSION; } @Override public void shutdown() { if (metastore == null) { return; } if (ownerTable != null) { ownerTable.close(); ownerTable = null; } if (persistTable != null) { persistTable.close(); persistTable = null; } if (subTable != null) { subTable.close(); subTable = null; } metastore.close(); metastore = null; } @Override public Iterator getTopics() throws IOException { SyncResult syn = new SyncResult(); persistTable.openCursor(NON_FIELDS, new MetastoreCallback() { public void complete(int rc, MetastoreCursor cursor, Object ctx) { @SuppressWarnings("unchecked") SyncResult syn = (SyncResult) ctx; syn.complete(rc, cursor); } }, syn); try { syn.block(); } catch (Exception e) { throw new IOException("Interrupted on getting topics list : ", e); } if (syn.getReturnCode() != MSException.Code.OK.getCode()) { throw new IOException("Failed to get topics : ", MSException.create( MSException.Code.get(syn.getReturnCode()), "")); } final MetastoreCursor cursor = syn.getValue(); return new Iterator() { Iterator itemIter = null; @Override public boolean hasNext() { while (null == itemIter || !itemIter.hasNext()) { if (!cursor.hasMoreEntries()) { return false; } try { itemIter = cursor.readEntries(cfg.getMetastoreMaxEntriesPerScan()); } catch (MSException mse) { logger.warn("Interrupted when iterating the topics list : ", mse); return false; } } return true; } @Override public ByteString next() { MetastoreTableItem t = itemIter.next(); return ByteString.copyFromUtf8(t.getKey()); } @Override public void remove() { throw new UnsupportedOperationException("Doesn't support remove topic from topic iterator."); } }; } @Override public TopicOwnershipManager newTopicOwnershipManager() { return new MsTopicOwnershipManagerImpl(ownerTable); } static class MsTopicOwnershipManagerImpl implements TopicOwnershipManager { static final String OWNER_FIELD = "owner"; final MetastoreTable ownerTable; MsTopicOwnershipManagerImpl(MetastoreTable ownerTable) { this.ownerTable = ownerTable; } @Override public void close() throws IOException { // do nothing } @Override public void readOwnerInfo(final ByteString topic, final Callback> callback, Object ctx) { ownerTable.get(topic.toStringUtf8(), new MetastoreCallback>() { @Override public void complete(int rc, Versioned value, Object ctx) { if (MSException.Code.NoKey.getCode() == rc) { callback.operationFinished(ctx, null); return; } if (MSException.Code.OK.getCode() != rc) { logErrorAndFinishOperation("Could not read ownership for topic " + topic.toStringUtf8(), callback, ctx, rc); return; } HubInfo owner = null; try { byte[] data = value.getValue().getField(OWNER_FIELD); if (data != null) { owner = HubInfo.parse(new String(data)); } } catch (HubInfo.InvalidHubInfoException ihie) { logger.warn("Failed to parse hub info for topic " + topic.toStringUtf8(), ihie); } 
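// If the owner field was absent or unparseable, owner stays null: the
// caller still receives a Versioned wrapper with a null hub info, matching
// the TopicOwnershipManager#readOwnerInfo contract.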
Version version = value.getVersion(); callback.operationFinished(ctx, new Versioned(owner, version)); } }, ctx); } @Override public void writeOwnerInfo(final ByteString topic, final HubInfo owner, final Version version, final Callback callback, Object ctx) { Value value = new Value(); value.setField(OWNER_FIELD, owner.toString().getBytes()); ownerTable.put(topic.toStringUtf8(), value, version, new MetastoreCallback() { @Override public void complete(int rc, Version ver, Object ctx) { if (MSException.Code.OK.getCode() == rc) { callback.operationFinished(ctx, ver); return; } else if (MSException.Code.NoKey.getCode() == rc) { // no node callback.operationFailed( ctx, PubSubException.create(StatusCode.NO_TOPIC_OWNER_INFO, "No owner info found for topic " + topic.toStringUtf8())); return; } else if (MSException.Code.KeyExists.getCode() == rc) { // key exists callback.operationFailed( ctx, PubSubException.create(StatusCode.TOPIC_OWNER_INFO_EXISTS, "Owner info of topic " + topic.toStringUtf8() + " existed.")); return; } else if (MSException.Code.BadVersion.getCode() == rc) { // bad version callback.operationFailed(ctx, PubSubException.create(StatusCode.BAD_VERSION, "Bad version provided to update owner info of topic " + topic.toStringUtf8())); return; } else { logErrorAndFinishOperation("Failed to update ownership of topic " + topic.toStringUtf8() + " to " + owner, callback, ctx, rc); return; } } }, ctx); } @Override public void deleteOwnerInfo(final ByteString topic, Version version, final Callback callback, Object ctx) { ownerTable.remove(topic.toStringUtf8(), version, new MetastoreCallback() { @Override public void complete(int rc, Void value, Object ctx) { if (MSException.Code.OK.getCode() == rc) { logger.debug("Successfully deleted owner info for topic {}", topic.toStringUtf8()); callback.operationFinished(ctx, null); return; } else if (MSException.Code.NoKey.getCode() == rc) { // no node callback.operationFailed( ctx, PubSubException.create(StatusCode.NO_TOPIC_OWNER_INFO, "No owner info found for topic " + topic.toStringUtf8())); return; } else if (MSException.Code.BadVersion.getCode() == rc) { // bad version callback.operationFailed(ctx, PubSubException.create(StatusCode.BAD_VERSION, "Bad version provided to delete owner info of topic " + topic.toStringUtf8())); return; } else { logErrorAndFinishOperation("Failed to delete owner info for topic " + topic.toStringUtf8(), callback, ctx, rc); return; } } }, ctx); } } @Override public TopicPersistenceManager newTopicPersistenceManager() { return new MsTopicPersistenceManagerImpl(persistTable); } static class MsTopicPersistenceManagerImpl implements TopicPersistenceManager { static final String PERSIST_FIELD = "prst"; final MetastoreTable persistTable; MsTopicPersistenceManagerImpl(MetastoreTable persistTable) { this.persistTable = persistTable; } @Override public void close() throws IOException { // do nothing } @Override public void readTopicPersistenceInfo(final ByteString topic, final Callback> callback, Object ctx) { persistTable.get(topic.toStringUtf8(), new MetastoreCallback>() { @Override public void complete(int rc, Versioned value, Object ctx) { if (MSException.Code.OK.getCode() == rc) { byte[] data = value.getValue().getField(PERSIST_FIELD); if (data != null) { parseAndReturnTopicLedgerRanges(topic, data, value.getVersion(), callback, ctx); } else { // null data is same as NoKey callback.operationFinished(ctx, null); } } else if (MSException.Code.NoKey.getCode() == rc) { callback.operationFinished(ctx, null); } else { 
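// Any other return code is unexpected here; log it and convert it to a
// PubSubException via the shared logErrorAndFinishOperation helper below.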
logErrorAndFinishOperation("Could not read ledgers node for topic " + topic.toStringUtf8(), callback, ctx, rc); } } }, ctx); } /** * Parse ledger ranges data and return it thru callback. * * @param topic * Topic name * @param data * Topic Ledger Ranges data * @param version * Version of the topic ledger ranges data * @param callback * Callback to return ledger ranges * @param ctx * Context of the callback */ private void parseAndReturnTopicLedgerRanges(ByteString topic, byte[] data, Version version, Callback> callback, Object ctx) { try { LedgerRanges.Builder rangesBuilder = LedgerRanges.newBuilder(); TextFormat.merge(new String(data, UTF8), rangesBuilder); LedgerRanges lr = rangesBuilder.build(); Versioned ranges = new Versioned(lr, version); callback.operationFinished(ctx, ranges); } catch (ParseException e) { StringBuilder sb = new StringBuilder(); sb.append("Ledger ranges for topic ").append(topic.toStringUtf8()) .append(" could not be deserialized."); String msg = sb.toString(); logger.error(msg, e); callback.operationFailed(ctx, new PubSubException.UnexpectedConditionException(msg)); } catch (UnsupportedEncodingException uee) { StringBuilder sb = new StringBuilder(); sb.append("Ledger ranges for topic ").append(topic.toStringUtf8()).append(" is not UTF-8 encoded."); String msg = sb.toString(); logger.error(msg, uee); callback.operationFailed(ctx, new PubSubException.UnexpectedConditionException(msg)); } } @Override public void writeTopicPersistenceInfo(final ByteString topic, LedgerRanges ranges, final Version version, final Callback callback, Object ctx) { Value value = new Value(); value.setField(PERSIST_FIELD, TextFormat.printToString(ranges).getBytes()); persistTable.put(topic.toStringUtf8(), value, version, new MetastoreCallback() { @Override public void complete(int rc, Version ver, Object ctx) { if (MSException.Code.OK.getCode() == rc) { callback.operationFinished(ctx, ver); return; } else if (MSException.Code.NoKey.getCode() == rc) { // no node callback.operationFailed(ctx, PubSubException.create(StatusCode.NO_TOPIC_PERSISTENCE_INFO, "No persistence info found for topic " + topic.toStringUtf8())); return; } else if (MSException.Code.KeyExists.getCode() == rc) { // key exists callback.operationFailed(ctx, PubSubException.create(StatusCode.TOPIC_PERSISTENCE_INFO_EXISTS, "Persistence info of topic " + topic.toStringUtf8() + " existed.")); return; } else if (MSException.Code.BadVersion.getCode() == rc) { // bad version callback.operationFailed(ctx, PubSubException.create(StatusCode.BAD_VERSION, "Bad version provided to update persistence info of topic " + topic.toStringUtf8())); return; } else { logErrorAndFinishOperation("Could not write ledgers node for topic " + topic.toStringUtf8(), callback, ctx, rc); } } }, ctx); } @Override public void deleteTopicPersistenceInfo(final ByteString topic, final Version version, final Callback callback, Object ctx) { persistTable.remove(topic.toStringUtf8(), version, new MetastoreCallback() { @Override public void complete(int rc, Void value, Object ctx) { if (MSException.Code.OK.getCode() == rc) { logger.debug("Successfully deleted persistence info for topic {}.", topic.toStringUtf8()); callback.operationFinished(ctx, null); return; } else if (MSException.Code.NoKey.getCode() == rc) { // no node callback.operationFailed(ctx, PubSubException.create(StatusCode.NO_TOPIC_PERSISTENCE_INFO, "No persistence info found for topic " + topic.toStringUtf8())); return; } else if (MSException.Code.BadVersion.getCode() == rc) { // bad version 
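// A compare-and-set style failure: the supplied version no longer matches
// the stored one, surfaced as StatusCode.BAD_VERSION so the caller can
// re-read and retry.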
callback.operationFailed(ctx, PubSubException.create(StatusCode.BAD_VERSION, "Bad version provided to delete persistence info of topic " + topic.toStringUtf8())); return; } else { logErrorAndFinishOperation("Failed to delete persistence info for topic: " + topic.toStringUtf8() + ", version: " + version, callback, ctx, rc, StatusCode.SERVICE_DOWN); return; } } }, ctx); } } @Override public SubscriptionDataManager newSubscriptionDataManager() { return new MsSubscriptionDataManagerImpl(cfg, subTable); } static class MsSubscriptionDataManagerImpl implements SubscriptionDataManager { static final String SUB_STATE_FIELD = "sub_state"; static final String SUB_PREFS_FIELD = "sub_preferences"; static final char TOPIC_SUB_FIRST_SEPARATOR = '\001'; static final char TOPIC_SUB_LAST_SEPARATOR = '\002'; final ServerConfiguration cfg; final MetastoreScannableTable subTable; MsSubscriptionDataManagerImpl(ServerConfiguration cfg, MetastoreScannableTable subTable) { this.cfg = cfg; this.subTable = subTable; } @Override public void close() throws IOException { // do nothing } private String getSubscriptionKey(ByteString topic, ByteString subscriberId) { return new StringBuilder(topic.toStringUtf8()).append(TOPIC_SUB_FIRST_SEPARATOR) .append(subscriberId.toStringUtf8()).toString(); } private Value subscriptionData2Value(SubscriptionData subData) { Value value = new Value(); if (subData.hasState()) { value.setField(SUB_STATE_FIELD, TextFormat.printToString(subData.getState()).getBytes()); } if (subData.hasPreferences()) { value.setField(SUB_PREFS_FIELD, TextFormat.printToString(subData.getPreferences()).getBytes()); } return value; } @Override public void createSubscriptionData(final ByteString topic, final ByteString subscriberId, final SubscriptionData subData, final Callback callback, Object ctx) { String key = getSubscriptionKey(topic, subscriberId); Value value = subscriptionData2Value(subData); subTable.put(key, value, Version.NEW, new MetastoreCallback() { @Override public void complete(int rc, Version ver, Object ctx) { if (rc == MSException.Code.OK.getCode()) { if (logger.isDebugEnabled()) { logger.debug("Successfully created subscription for topic: " + topic.toStringUtf8() + ", subscriberId: " + subscriberId.toStringUtf8() + ", data: " + SubscriptionStateUtils.toString(subData)); } callback.operationFinished(ctx, ver); } else if (rc == MSException.Code.KeyExists.getCode()) { callback.operationFailed(ctx, PubSubException.create( StatusCode.SUBSCRIPTION_STATE_EXISTS, "Subscription data for (topic:" + topic.toStringUtf8() + ", subscriber:" + subscriberId.toStringUtf8() + ") existed.")); return; } else { logErrorAndFinishOperation("Failed to create subscription data for topic: " + topic.toStringUtf8() + ", subscriberId: " + subscriberId.toStringUtf8() + ", data: " + SubscriptionStateUtils.toString(subData), callback, ctx, rc); } } }, ctx); } @Override public boolean isPartialUpdateSupported() { // TODO: Here we assume the Metastore supports partial update, but this // may be incorrect.
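// Partial update is assumed to work because subscriptionData2Value(...)
// above only sets the table fields present in the supplied SubscriptionData;
// this relies on the metastore merging fields on put rather than replacing
// the whole value, which is exactly the assumption the TODO above flags.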
return true; } @Override public void replaceSubscriptionData(final ByteString topic, final ByteString subscriberId, final SubscriptionData subData, final Version version, final Callback callback, final Object ctx) { updateSubscriptionData(topic, subscriberId, subData, version, callback, ctx); } @Override public void updateSubscriptionData(final ByteString topic, final ByteString subscriberId, final SubscriptionData subData, final Version version, final Callback callback, final Object ctx) { String key = getSubscriptionKey(topic, subscriberId); Value value = subscriptionData2Value(subData); subTable.put(key, value, version, new MetastoreCallback() { @Override public void complete(int rc, Version version, Object ctx) { if (rc == MSException.Code.OK.getCode()) { if (logger.isDebugEnabled()) { logger.debug("Successfully updated subscription data for topic: " + topic.toStringUtf8() + ", subscriberId: " + subscriberId.toStringUtf8() + ", data: " + SubscriptionStateUtils.toString(subData) + ", version: " + version); } callback.operationFinished(ctx, version); } else if (rc == MSException.Code.NoKey.getCode()) { // no node callback.operationFailed(ctx, PubSubException.create(StatusCode.NO_SUBSCRIPTION_STATE, "No subscription data found for (topic:" + topic.toStringUtf8() + ", subscriber:" + subscriberId.toStringUtf8() + ").")); return; } else if (rc == MSException.Code.BadVersion.getCode()) { // bad version callback.operationFailed(ctx, PubSubException.create(StatusCode.BAD_VERSION, "Bad version provided to update subscription data of topic " + topic.toStringUtf8() + " subscriberId " + subscriberId)); return; } else { logErrorAndFinishOperation( "Failed to update subscription data for topic: " + topic.toStringUtf8() + ", subscriberId: " + subscriberId.toStringUtf8() + ", data: " + SubscriptionStateUtils.toString(subData) + ", version: " + version, callback, ctx, rc); } } }, ctx); } @Override public void deleteSubscriptionData(final ByteString topic, final ByteString subscriberId, Version version, final Callback callback, Object ctx) { String key = getSubscriptionKey(topic, subscriberId); subTable.remove(key, version, new MetastoreCallback() { @Override public void complete(int rc, Void value, Object ctx) { if (rc == MSException.Code.OK.getCode()) { logger.debug("Successfully deleted subscription for topic: {}, subscriberId: {}.", topic.toStringUtf8(), subscriberId.toStringUtf8()); callback.operationFinished(ctx, null); return; } else if (rc == MSException.Code.BadVersion.getCode()) { // bad version callback.operationFailed(ctx, PubSubException.create(StatusCode.BAD_VERSION, "Bad version provided to delete subscription data of topic " + topic.toStringUtf8() + " subscriberId " + subscriberId)); return; } else if (rc == MSException.Code.NoKey.getCode()) { // no node callback.operationFailed(ctx, PubSubException.create(StatusCode.NO_SUBSCRIPTION_STATE, "No subscription data found for (topic:" + topic.toStringUtf8() + ", subscriber:" + subscriberId.toStringUtf8() + ").")); return; } else { logErrorAndFinishOperation("Failed to delete subscription for topic: " + topic.toStringUtf8() + ", subscriberId: " + subscriberId.toStringUtf8(), callback, ctx, rc, StatusCode.SERVICE_DOWN); } } }, ctx); } private SubscriptionData value2SubscriptionData(Value value) throws ParseException, UnsupportedEncodingException { SubscriptionData.Builder builder = SubscriptionData.newBuilder(); byte[] stateData = value.getField(SUB_STATE_FIELD); if (null != stateData) { SubscriptionState.Builder stateBuilder =
SubscriptionState.newBuilder(); TextFormat.merge(new String(stateData, UTF8), stateBuilder); SubscriptionState state = stateBuilder.build(); builder.setState(state); } byte[] prefsData = value.getField(SUB_PREFS_FIELD); if (null != prefsData) { SubscriptionPreferences.Builder preferencesBuilder = SubscriptionPreferences.newBuilder(); TextFormat.merge(new String(prefsData, UTF8), preferencesBuilder); SubscriptionPreferences preferences = preferencesBuilder.build(); builder.setPreferences(preferences); } return builder.build(); } @Override public void readSubscriptionData(final ByteString topic, final ByteString subscriberId, final Callback> callback, Object ctx) { String key = getSubscriptionKey(topic, subscriberId); subTable.get(key, new MetastoreCallback>() { @Override public void complete(int rc, Versioned value, Object ctx) { if (rc == MSException.Code.NoKey.getCode()) { callback.operationFinished(ctx, null); return; } if (rc != MSException.Code.OK.getCode()) { logErrorAndFinishOperation( "Could not read subscription data for topic: " + topic.toStringUtf8() + ", subscriberId: " + subscriberId.toStringUtf8(), callback, ctx, rc); return; } try { Versioned subData = new Versioned( value2SubscriptionData(value.getValue()), value.getVersion()); if (logger.isDebugEnabled()) { logger.debug("Found subscription while acquiring topic: " + topic.toStringUtf8() + ", subscriberId: " + subscriberId.toStringUtf8() + ", data: " + SubscriptionStateUtils.toString(subData.getValue()) + ", version: " + subData.getVersion()); } callback.operationFinished(ctx, subData); } catch (ParseException e) { StringBuilder sb = new StringBuilder(); sb.append("Failed to deserialize subscription data for topic: ").append(topic.toStringUtf8()) .append(", subscriberId: ").append(subscriberId.toStringUtf8()); String msg = sb.toString(); logger.error(msg, e); callback.operationFailed(ctx, new PubSubException.UnexpectedConditionException(msg)); } catch (UnsupportedEncodingException uee) { StringBuilder sb = new StringBuilder(); sb.append("Subscription data for topic: ").append(topic.toStringUtf8()) .append(", subscriberId: ").append(subscriberId.toStringUtf8()) .append(" is not UTF-8 encoded"); String msg = sb.toString(); logger.error(msg, uee); callback.operationFailed(ctx, new PubSubException.UnexpectedConditionException(msg)); } } }, ctx); } private String getSubscriptionPrefix(ByteString topic, char sep) { return new StringBuilder(topic.toStringUtf8()).append(sep).toString(); } private void readSubscriptions(final ByteString topic, final int keyLength, final MetastoreCursor cursor, final Map> topicSubs, final Callback>> callback, Object ctx) { if (!cursor.hasMoreEntries()) { callback.operationFinished(ctx, topicSubs); return; } ReadEntriesCallback readCb = new ReadEntriesCallback() { @Override public void complete(int rc, Iterator items, Object ctx) { if (rc != MSException.Code.OK.getCode()) { logErrorAndFinishOperation("Could not read subscribers for cursor " + cursor, callback, ctx, rc); return; } while (items.hasNext()) { MetastoreTableItem item = items.next(); final ByteString subscriberId = ByteString.copyFromUtf8(item.getKey().substring(keyLength)); try { Versioned vv = item.getValue(); Versioned subData = new Versioned( value2SubscriptionData(vv.getValue()), vv.getVersion()); topicSubs.put(subscriberId, subData); } catch (ParseException e) { StringBuilder sb = new StringBuilder(); sb.append("Failed to deserialize subscription data for topic: ") .append(topic.toStringUtf8()).append(", subscriberId: ")
.append(subscriberId.toStringUtf8()); String msg = sb.toString(); logger.error(msg, e); callback.operationFailed(ctx, new PubSubException.UnexpectedConditionException(msg)); return; } catch (UnsupportedEncodingException e) { StringBuilder sb = new StringBuilder(); sb.append("Subscription data for topic: ").append(topic.toStringUtf8()) .append(", subscriberId: ").append(subscriberId.toStringUtf8()) .append(" is not UTF-8 encoded."); String msg = sb.toString(); logger.error(msg, e); callback.operationFailed(ctx, new PubSubException.UnexpectedConditionException(msg)); return; } } readSubscriptions(topic, keyLength, cursor, topicSubs, callback, ctx); } }; cursor.asyncReadEntries(cfg.getMetastoreMaxEntriesPerScan(), readCb, ctx); } @Override public void readSubscriptions(final ByteString topic, final Callback>> callback, Object ctx) { final String firstKey = getSubscriptionPrefix(topic, TOPIC_SUB_FIRST_SEPARATOR); String lastKey = getSubscriptionPrefix(topic, TOPIC_SUB_LAST_SEPARATOR); subTable.openCursor(firstKey, true, lastKey, true, Order.ASC, ALL_FIELDS, new MetastoreCallback() { @Override public void complete(int rc, MetastoreCursor cursor, Object ctx) { if (rc != MSException.Code.OK.getCode()) { logErrorAndFinishOperation( "Could not read subscribers for topic " + topic.toStringUtf8(), callback, ctx, rc); return; } final Map> topicSubs = new ConcurrentHashMap>(); readSubscriptions(topic, firstKey.length(), cursor, topicSubs, callback, ctx); } }, ctx); } } /** * Finish the operation's callback with the exception specified by code, regardless of * the value of return code rc. */ private static void logErrorAndFinishOperation(String msg, Callback callback, Object ctx, int rc, StatusCode code) { logger.error(msg, MSException.create(MSException.Code.get(rc), "")); callback.operationFailed(ctx, PubSubException.create(code, msg)); } /** * Finish the operation's callback with the corresponding PubSubException converted * from return code rc. */ private static void logErrorAndFinishOperation(String msg, Callback callback, Object ctx, int rc) { StatusCode code; if (rc == MSException.Code.NoKey.getCode()) { code = StatusCode.NO_SUCH_TOPIC; } else if (rc == MSException.Code.ServiceDown.getCode()) { code = StatusCode.SERVICE_DOWN; } else { code = StatusCode.UNEXPECTED_CONDITION; } logErrorAndFinishOperation(msg, callback, ctx, rc, code); } @Override public void format(ServerConfiguration cfg, ZooKeeper zk) throws IOException { try { int maxEntriesPerScan = cfg.getMetastoreMaxEntriesPerScan(); // clean topic ownership table. logger.info("Cleaning topic ownership table ..."); MetastoreUtils.cleanTable(ownerTable, maxEntriesPerScan); logger.info("Cleaned topic ownership table successfully."); // clean topic subscription table. logger.info("Cleaning topic subscription table ..."); MetastoreUtils.cleanTable(subTable, maxEntriesPerScan); logger.info("Cleaned topic subscription table successfully."); // clean topic persistence info table.
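// (cleanTable presumably scans and removes entries in batches bounded by
// maxEntriesPerScan, as for the two tables above)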
logger.info("Cleaning topic persistence info table ..."); MetastoreUtils.cleanTable(persistTable, maxEntriesPerScan); logger.info("Cleaned topic persistence info table successfully."); } catch (MSException mse) { throw new IOException("Exception when formatting hedwig metastore : ", mse); } catch (InterruptedException ie) { throw new IOException("Interrupted when formatting hedwig metastore : ", ie); } } } SubscriptionDataManager.java000066400000000000000000000141141244507361200355510ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/meta/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.meta; import java.io.Closeable; import java.util.Map; import com.google.protobuf.ByteString; import org.apache.bookkeeper.versioning.Version; import org.apache.bookkeeper.versioning.Versioned; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionData; import org.apache.hedwig.server.subscriptions.InMemorySubscriptionState; import org.apache.hedwig.util.Callback; /** * Manage subscription data. */ public interface SubscriptionDataManager extends Closeable { /** * Create subscription data. * * @param topic * Topic name * @param subscriberId * Subscriber id * @param data * Subscription data * @param callback * Callback when subscription state created. New version would be returned. * {@link PubSubException.SubscriptionStateExistsException} is returned when subscription state * existed before. * @param ctx * Context of the callback */ public void createSubscriptionData(ByteString topic, ByteString subscriberId, SubscriptionData data, Callback callback, Object ctx); /** * Whether the metadata manager supports partial update. * * @return true if the metadata manager supports partial update. * otherwise, return false. */ public boolean isPartialUpdateSupported(); /** * Update subscription data. * * @param topic * Topic name * @param subscriberId * Subscriber id * @param dataToUpdate * Subscription data to update. So it is a partial data, which contains * the part of data to update. The implementation should not replace * existing subscription data with dataToUpdate directly. * E.g. if there is only state in it, you should update state only. * @param version * Current version of subscription data. * @param callback * Callback when subscription state updated. New version would be returned. * {@link PubSubException.BadVersionException} is returned when version doesn't match, * {@link PubSubException.NoSubscriptionStateException} is returned when no subscription state * is found. 
* @param ctx * Context of the callback */ public void updateSubscriptionData(ByteString topic, ByteString subscriberId, SubscriptionData dataToUpdate, Version version, Callback callback, Object ctx); /** * Replace subscription data. * * @param topic * Topic name * @param subscriberId * Subscriber id * @param dataToReplace * Subscription data to replace. * @param version * Current version of subscription data. * @param callback * Callback when subscription state updated. New version would be returned. * {@link PubSubException.BadVersionException} is returned when version doesn't match, * {@link PubSubException.NoSubscriptionStateException} is returned when no subscription state * is found. * @param ctx * Context of the callback */ public void replaceSubscriptionData(ByteString topic, ByteString subscriberId, SubscriptionData dataToReplace, Version version, Callback callback, Object ctx); /** * Remove subscription data. * * @param topic * Topic name * @param subscriberId * Subscriber id * @param version * Current version of subscription data. * @param callback * Callback when subscription state deleted * {@link PubSubException.BadVersionException} is returned when version doesn't match, * {@link PubSubException.NoSubscriptionStateException} is returned when no subscription state * is found. * @param ctx * Context of the callback */ public void deleteSubscriptionData(ByteString topic, ByteString subscriberId, Version version, Callback callback, Object ctx); /** * Read subscription data with version. * * @param topic * Topic Name * @param subscriberId * Subscriber id * @param callback * Callback when subscription data read. * Null is returned when no subscription data is found. * @param ctx * Context of the callback */ public void readSubscriptionData(ByteString topic, ByteString subscriberId, Callback> callback, Object ctx); /** * Read all subscriptions of a topic. * * @param topic * Topic name * @param cb * Callback to return subscriptions with version information * @param ctx * Context of the callback */ public void readSubscriptions(ByteString topic, Callback>> cb, Object ctx); } TopicOwnershipManager.java000066400000000000000000000101741244507361200352520ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/meta/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License.
*/ package org.apache.hedwig.server.meta; import java.io.Closeable; import java.io.IOException; import java.util.Map; import com.google.protobuf.ByteString; import org.apache.bookkeeper.versioning.Version; import org.apache.bookkeeper.versioning.Versioned; import org.apache.hedwig.protocol.PubSubProtocol.LedgerRanges; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionState; import org.apache.hedwig.server.common.ServerConfiguration; import org.apache.hedwig.server.subscriptions.InMemorySubscriptionState; import org.apache.hedwig.server.topics.HubInfo; import org.apache.hedwig.util.Callback; import org.apache.zookeeper.ZooKeeper; /** * Manage topic ownership */ public interface TopicOwnershipManager extends Closeable { /** * Read owner information of a topic. * * @param topic * Topic Name * @param callback * Callback to return hub info. If there is no owner info, return null; * If there is data but not valid owner info, return a Versioned object with null hub info; * If there is valid owner info, return versioned hub info. * @param ctx * Context of the callback */ public void readOwnerInfo(ByteString topic, Callback> callback, Object ctx); /** * Write owner info for a specified topic. * A new owner info would be created if there is no one existed before. * * @param topic * Topic Name * @param owner * Owner hub info * @param version * Current version of owner info * If version is {@link Version.NEW}, create owner info. * {@link PubSubException.TopicOwnerInfoExistsException} is returned when * owner info existed before. * Otherwise, the owner info is updated only when * provided version equals to its current version. * {@link PubSubException.BadVersionException} is returned when version doesn't match, * {@link PubSubException.NoTopicOwnerInfoException} is returned when no owner info * found to update. * @param callback * Callback when owner info updated. New version would be returned if succeed to write. * @param ctx * Context of the callback */ public void writeOwnerInfo(ByteString topic, HubInfo owner, Version version, Callback callback, Object ctx); /** * Delete owner info for a specified topic. * * @param topic * Topic Name * @param version * Current version of owner info * If version is {@link Version.ANY}, delete owner info no matter its current version. * Otherwise, the owner info is deleted only when * provided version equals to its current version. * @param callback * Callback when owner info deleted. * {@link PubSubException.NoTopicOwnerInfoException} is returned when no owner info. * {@link PubSubException.BadVersionException} is returned when version doesn't match. * @param ctx * Context of the callback. */ public void deleteOwnerInfo(ByteString topic, Version version, Callback callback, Object ctx); } TopicPersistenceManager.java000066400000000000000000000076271244507361200355710ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/meta/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. 
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.meta; import java.io.Closeable; import com.google.protobuf.ByteString; import org.apache.bookkeeper.versioning.Version; import org.apache.bookkeeper.versioning.Versioned; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.protocol.PubSubProtocol.LedgerRanges; import org.apache.hedwig.util.Callback; /** * Manage topic persistence metadata. */ public interface TopicPersistenceManager extends Closeable { /** * Read persistence info of a specified topic. * * @param topic * Topic Name * @param callback * Callback when read persistence info. * If no persistence info found, return null. * @param ctx * Context of the callback */ public void readTopicPersistenceInfo(ByteString topic, Callback> callback, Object ctx); /** * Update persistence info of a specified topic. * * @param topic * Topic name * @param ranges * Persistence info * @param version * Current version of persistence info. * If version is {@link Version.NEW}, create persistence info; * {@link PubSubException.TopicPersistenceInfoExistsException} is returned when * persistence info existed before. * Otherwise, the persistence info is updated only when * provided version equals to its current version. * {@link PubSubException.BadVersionException} is returned when version doesn't match, * {@link PubSubException.NoTopicPersistenceInfoException} is returned when no * persistence info found to update. * @param callback * Callback when persistence info updated. New version would be returned. * @param ctx * Context of the callback */ public void writeTopicPersistenceInfo(ByteString topic, LedgerRanges ranges, Version version, Callback callback, Object ctx); /** * Delete persistence info of a specified topic. * Currently used in test cases. * * @param topic * Topic name * @param version * Current version of persistence info * If version is {@link Version.ANY}, delete persistence info no matter its current version. * Otherwise, the persistence info is deleted only when * provided version equals to its current version. * @param callback * Callback to return whether the deletion succeeded. * {@link PubSubException.NoTopicPersistenceInfoException} is returned when no persistence * info found to delete. * {@link PubSubException.BadVersionException} is returned when version doesn't match. * @param ctx * Context of the callback */ public void deleteTopicPersistenceInfo(ByteString topic, Version version, Callback callback, Object ctx); } ZkMetadataManagerFactory.java000066400000000000000000001222521244507361200356530ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/meta/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License.
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.meta; import java.io.IOException; import java.util.List; import java.util.Map; import java.util.Iterator; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicInteger; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.apache.zookeeper.CreateMode; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.KeeperException.Code; import org.apache.zookeeper.ZKUtil; import org.apache.zookeeper.ZooKeeper; import org.apache.zookeeper.ZooDefs.Ids; import org.apache.zookeeper.data.Stat; import com.google.protobuf.ByteString; import com.google.protobuf.InvalidProtocolBufferException; import org.apache.bookkeeper.versioning.Version; import org.apache.bookkeeper.versioning.Versioned; import org.apache.bookkeeper.client.BKException; import org.apache.bookkeeper.meta.ZkVersion; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.protocol.PubSubProtocol.LedgerRanges; import org.apache.hedwig.protocol.PubSubProtocol.StatusCode; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionData; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionState; import org.apache.hedwig.protoextensions.SubscriptionStateUtils; import org.apache.hedwig.server.common.ServerConfiguration; import org.apache.hedwig.server.topics.HubInfo; import org.apache.hedwig.util.Callback; import org.apache.hedwig.zookeeper.SafeAsyncZKCallback; import org.apache.hedwig.zookeeper.ZkUtils; /** * ZooKeeper-based Metadata Manager. */ public class ZkMetadataManagerFactory extends MetadataManagerFactory { protected final static Logger logger = LoggerFactory.getLogger(ZkMetadataManagerFactory.class); static final int CUR_VERSION = 1; ZooKeeper zk; ServerConfiguration cfg; @Override public int getCurrentVersion() { return CUR_VERSION; } @Override public MetadataManagerFactory initialize(ServerConfiguration cfg, ZooKeeper zk, int version) throws IOException { if (CUR_VERSION != version) { throw new IOException("Incompatible ZkMetadataManagerFactory version " + version + " found, expected version " + CUR_VERSION); } this.cfg = cfg; this.zk = zk; return this; } @Override public void shutdown() { // do nothing here, because zookeeper handle is passed from outside // we don't need to stop it. 
} @Override public Iterator getTopics() throws IOException { List topics; try { topics = zk.getChildren(cfg.getZkTopicsPrefix(new StringBuilder()).toString(), false); } catch (KeeperException ke) { throw new IOException("Failed to get topics list : ", ke); } catch (InterruptedException ie) { throw new IOException("Interrupted on getting topics list : ", ie); } final Iterator iter = topics.iterator(); return new Iterator() { @Override public boolean hasNext() { return iter.hasNext(); } @Override public ByteString next() { String t = iter.next(); return ByteString.copyFromUtf8(t); } @Override public void remove() { iter.remove(); } }; } @Override public TopicPersistenceManager newTopicPersistenceManager() { return new ZkTopicPersistenceManagerImpl(cfg, zk); } @Override public SubscriptionDataManager newSubscriptionDataManager() { return new ZkSubscriptionDataManagerImpl(cfg, zk); } @Override public TopicOwnershipManager newTopicOwnershipManager() { return new ZkTopicOwnershipManagerImpl(cfg, zk); } /** * ZooKeeper based topic persistence manager. */ static class ZkTopicPersistenceManagerImpl implements TopicPersistenceManager { ZooKeeper zk; ServerConfiguration cfg; ZkTopicPersistenceManagerImpl(ServerConfiguration conf, ZooKeeper zk) { this.cfg = conf; this.zk = zk; } @Override public void close() throws IOException { // do nothing in zookeeper based impl } /** * Get znode path to store persistence info of a topic. * * @param topic * Topic name * @return znode path to store persistence info. */ private String ledgersPath(ByteString topic) { return cfg.getZkTopicPath(new StringBuilder(), topic).append("/ledgers").toString(); } /** * Parse ledger ranges data and return it thru callback. * * @param topic * Topic name * @param data * Topic Ledger Ranges data * @param version * Version of the topic ledger ranges data * @param callback * Callback to return ledger ranges * @param ctx * Context of the callback */ private void parseAndReturnTopicLedgerRanges(ByteString topic, byte[] data, int version, Callback> callback, Object ctx) { try { Versioned ranges = new Versioned(LedgerRanges.parseFrom(data), new ZkVersion(version)); callback.operationFinished(ctx, ranges); return; } catch (InvalidProtocolBufferException e) { String msg = "Ledger ranges for topic:" + topic.toStringUtf8() + " could not be deserialized"; logger.error(msg, e); callback.operationFailed(ctx, new PubSubException.UnexpectedConditionException(msg)); return; } } @Override public void readTopicPersistenceInfo(final ByteString topic, final Callback> callback, Object ctx) { // read topic ledgers node data final String zNodePath = ledgersPath(topic); zk.getData(zNodePath, false, new SafeAsyncZKCallback.DataCallback() { @Override public void safeProcessResult(int rc, String path, Object ctx, byte[] data, Stat stat) { if (rc == Code.OK.intValue()) { parseAndReturnTopicLedgerRanges(topic, data, stat.getVersion(), callback, ctx); return; } if (rc == Code.NONODE.intValue()) { // we don't create the znode until we first write it. 
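// NONODE just means no persistence info has been recorded for this topic
// yet, so report "no info" (null) rather than an error, matching the
// TopicPersistenceManager#readTopicPersistenceInfo contract.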
callback.operationFinished(ctx, null); return; } // otherwise some other error KeeperException ke = ZkUtils.logErrorAndCreateZKException("Could not read ledgers node for topic: " + topic.toStringUtf8(), path, rc); callback.operationFailed(ctx, new PubSubException.ServiceDownException(ke)); } }, ctx); } private void createTopicPersistenceInfo(final ByteString topic, LedgerRanges ranges, final Callback callback, Object ctx) { final String zNodePath = ledgersPath(topic); final byte[] data = ranges.toByteArray(); // create it ZkUtils.createFullPathOptimistic(zk, zNodePath, data, Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT, new SafeAsyncZKCallback.StringCallback() { @Override public void safeProcessResult(int rc, String path, Object ctx, String name) { if (rc == Code.NODEEXISTS.intValue()) { callback.operationFailed(ctx, PubSubException.create(StatusCode.TOPIC_PERSISTENCE_INFO_EXISTS, "Persistence info of topic " + topic.toStringUtf8() + " existed.")); return; } if (rc != Code.OK.intValue()) { KeeperException ke = ZkUtils.logErrorAndCreateZKException( "Could not create ledgers node for topic: " + topic.toStringUtf8(), path, rc); callback.operationFailed(ctx, new PubSubException.ServiceDownException(ke)); return; } // initial version is version 0 callback.operationFinished(ctx, new ZkVersion(0)); } }, ctx); return; } @Override public void writeTopicPersistenceInfo(final ByteString topic, LedgerRanges ranges, final Version version, final Callback callback, Object ctx) { if (Version.NEW == version) { createTopicPersistenceInfo(topic, ranges, callback, ctx); return; } final String zNodePath = ledgersPath(topic); final byte[] data = ranges.toByteArray(); if (!(version instanceof ZkVersion)) { callback.operationFailed(ctx, new PubSubException.UnexpectedConditionException( "Invalid version provided to update persistence info for topic " + topic.toStringUtf8())); return; } int znodeVersion = ((ZkVersion)version).getZnodeVersion(); zk.setData(zNodePath, data, znodeVersion, new SafeAsyncZKCallback.StatCallback() { @Override public void safeProcessResult(int rc, String path, Object ctx, Stat stat) { if (rc == Code.NONODE.intValue()) { // no node callback.operationFailed(ctx, PubSubException.create(StatusCode.NO_TOPIC_PERSISTENCE_INFO, "No persistence info found for topic " + topic.toStringUtf8())); return; } else if (rc == Code.BadVersion) { // bad version callback.operationFailed(ctx, PubSubException.create(StatusCode.BAD_VERSION, "Bad version provided to update persistence info of topic " + topic.toStringUtf8())); return; } else if (rc == Code.OK.intValue()) { callback.operationFinished(ctx, new ZkVersion(stat.getVersion())); return; } else { KeeperException ke = ZkUtils.logErrorAndCreateZKException( "Could not write ledgers node for topic: " + topic.toStringUtf8(), path, rc); callback.operationFailed(ctx, new PubSubException.ServiceDownException(ke)); return; } } }, ctx); } @Override public void deleteTopicPersistenceInfo(final ByteString topic, final Version version, final Callback callback, Object ctx) { final String zNodePath = ledgersPath(topic); int znodeVersion = -1; if (Version.ANY != version) { if (!(version instanceof ZkVersion)) { callback.operationFailed(ctx, new PubSubException.UnexpectedConditionException( "Invalid version provided to delete persistence info for topic " + topic.toStringUtf8())); return; } else { znodeVersion = ((ZkVersion)version).getZnodeVersion(); } } zk.delete(zNodePath, znodeVersion, new SafeAsyncZKCallback.VoidCallback() { @Override public void safeProcessResult(int 
rc, String path, Object ctx) { if (rc == Code.OK.intValue()) { callback.operationFinished(ctx, null); return; } else if (rc == Code.NONODE.intValue()) { // no node callback.operationFailed(ctx, PubSubException.create(StatusCode.NO_TOPIC_PERSISTENCE_INFO, "No persistence info found for topic " + topic.toStringUtf8())); return; } else if (rc == Code.BadVersion) { // bad version callback.operationFailed(ctx, PubSubException.create(StatusCode.BAD_VERSION, "Bad version provided to delete persistence info of topic " + topic.toStringUtf8())); return; } KeeperException e = ZkUtils.logErrorAndCreateZKException("Topic: " + topic.toStringUtf8() + " failed to delete persistence info @version " + version + " : ", path, rc); callback.operationFailed(ctx, new PubSubException.ServiceDownException(e)); } }, ctx); } } /** * ZooKeeper based subscription data manager. */ static class ZkSubscriptionDataManagerImpl implements SubscriptionDataManager { ZooKeeper zk; ServerConfiguration cfg; ZkSubscriptionDataManagerImpl(ServerConfiguration conf, ZooKeeper zk) { this.cfg = conf; this.zk = zk; } @Override public void close() throws IOException { // do nothing in zookeeper based impl } /** * Get znode path to store subscription states. * * @param sb * String builder to store the znode path. * @param topic * Topic name. * * @return string builder to store znode path. */ private StringBuilder topicSubscribersPath(StringBuilder sb, ByteString topic) { return cfg.getZkTopicPath(sb, topic).append("/subscribers"); } /** * Get znode path to store subscription state for a specified subscriber. * * @param topic * Topic name. * @param subscriber * Subscriber id. * @return znode path to store subscription state. */ private String topicSubscriberPath(ByteString topic, ByteString subscriber) { return topicSubscribersPath(new StringBuilder(), topic).append("/").append(subscriber.toStringUtf8()) .toString(); } @Override public boolean isPartialUpdateSupported() { return false; } @Override public void createSubscriptionData(final ByteString topic, final ByteString subscriberId, final SubscriptionData data, final Callback callback, final Object ctx) { ZkUtils.createFullPathOptimistic(zk, topicSubscriberPath(topic, subscriberId), data.toByteArray(), Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT, new SafeAsyncZKCallback.StringCallback() { @Override public void safeProcessResult(int rc, String path, Object ctx, String name) { if (rc == Code.NODEEXISTS.intValue()) { callback.operationFailed(ctx, PubSubException.create(StatusCode.SUBSCRIPTION_STATE_EXISTS, "Subscription state for (topic:" + topic.toStringUtf8() + ", subscriber:" + subscriberId.toStringUtf8() + ") existed.")); return; } else if (rc == Code.OK.intValue()) { if (logger.isDebugEnabled()) { logger.debug("Successfully recorded subscription for topic: " + topic.toStringUtf8() + " subscriberId: " + subscriberId.toStringUtf8() + " data: " + SubscriptionStateUtils.toString(data)); } callback.operationFinished(ctx, new ZkVersion(0)); } else { KeeperException ke = ZkUtils.logErrorAndCreateZKException( "Could not record new subscription for topic: " + topic.toStringUtf8() + " subscriberId: " + subscriberId.toStringUtf8(), path, rc); callback.operationFailed(ctx, new PubSubException.ServiceDownException(ke)); } } }, ctx); } @Override public void updateSubscriptionData(final ByteString topic, final ByteString subscriberId, final SubscriptionData data, final Version version, final Callback callback, final Object ctx) { throw new UnsupportedOperationException("ZooKeeper based metadata 
manager doesn't support partial update!"); } @Override public void replaceSubscriptionData(final ByteString topic, final ByteString subscriberId, final SubscriptionData data, final Version version, final Callback callback, final Object ctx) { int znodeVersion = -1; if (Version.NEW == version) { callback.operationFailed(ctx, new PubSubException.BadVersionException("Can not replace Version.New subscription data")); return; } else if (Version.ANY != version) { if (!(version instanceof ZkVersion)) { callback.operationFailed(ctx, new PubSubException.UnexpectedConditionException( "Invalid version provided to replace subscription data for topic " + topic.toStringUtf8() + " subscribe id: " + subscriberId)); return; } else { znodeVersion = ((ZkVersion)version).getZnodeVersion(); } } zk.setData(topicSubscriberPath(topic, subscriberId), data.toByteArray(), znodeVersion, new SafeAsyncZKCallback.StatCallback() { @Override public void safeProcessResult(int rc, String path, Object ctx, Stat stat) { if (rc == Code.NONODE.intValue()) { // no node callback.operationFailed(ctx, PubSubException.create(StatusCode.NO_SUBSCRIPTION_STATE, "No subscription state found for (topic:" + topic.toStringUtf8() + ", subscriber:" + subscriberId.toStringUtf8() + ").")); return; } else if (rc == Code.BadVersion) { // bad version callback.operationFailed(ctx, PubSubException.create(StatusCode.BAD_VERSION, "Bad version provided to replace subscription data of topic " + topic.toStringUtf8() + " subscriberId " + subscriberId)); return; } else if (rc != Code.OK.intValue()) { KeeperException e = ZkUtils.logErrorAndCreateZKException("Topic: " + topic.toStringUtf8() + " subscriberId: " + subscriberId.toStringUtf8() + " could not set subscription data: " + SubscriptionStateUtils.toString(data), path, rc); callback.operationFailed(ctx, new PubSubException.ServiceDownException(e)); } else { if (logger.isDebugEnabled()) { logger.debug("Successfully updated subscription for topic: " + topic.toStringUtf8() + " subscriberId: " + subscriberId.toStringUtf8() + " data: " + SubscriptionStateUtils.toString(data)); } callback.operationFinished(ctx, new ZkVersion(stat.getVersion())); } } }, ctx); } @Override public void deleteSubscriptionData(final ByteString topic, final ByteString subscriberId, Version version, final Callback callback, Object ctx) { int znodeVersion = -1; if (Version.NEW == version) { callback.operationFailed(ctx, new PubSubException.BadVersionException("Can not delete Version.New subscription data")); return; } else if (Version.ANY != version) { if (!(version instanceof ZkVersion)) { callback.operationFailed(ctx, new PubSubException.UnexpectedConditionException( "Invalid version provided to delete subscription data for topic " + topic.toStringUtf8() + " subscribe id: " + subscriberId)); return; } else { znodeVersion = ((ZkVersion)version).getZnodeVersion(); } } zk.delete(topicSubscriberPath(topic, subscriberId), znodeVersion, new SafeAsyncZKCallback.VoidCallback() { @Override public void safeProcessResult(int rc, String path, Object ctx) { if (rc == Code.NONODE.intValue()) { // no node callback.operationFailed(ctx, PubSubException.create(StatusCode.NO_SUBSCRIPTION_STATE, "No subscription state found for (topic:" + topic.toStringUtf8() + ", subscriber:" + subscriberId.toStringUtf8() + ").")); return; } else if (rc == Code.BadVersion) { // bad version callback.operationFailed(ctx, PubSubException.create(StatusCode.BAD_VERSION, "Bad version provided to delete subscription data of topic " + topic.toStringUtf8() + " subscriberId " 
+ subscriberId)); return; } else if (rc == Code.OK.intValue()) { if (logger.isDebugEnabled()) { logger.debug("Successfully deleted subscription for topic: " + topic.toStringUtf8() + " subscriberId: " + subscriberId.toStringUtf8()); } callback.operationFinished(ctx, null); return; } KeeperException e = ZkUtils.logErrorAndCreateZKException("Topic: " + topic.toStringUtf8() + " subscriberId: " + subscriberId.toStringUtf8() + " failed to delete subscription", path, rc); callback.operationFailed(ctx, new PubSubException.ServiceDownException(e)); } }, ctx); } @Override public void readSubscriptionData(final ByteString topic, final ByteString subscriberId, final Callback> callback, final Object ctx) { zk.getData(topicSubscriberPath(topic, subscriberId), false, new SafeAsyncZKCallback.DataCallback() { @Override public void safeProcessResult(int rc, String path, Object ctx, byte[] data, Stat stat) { if (rc == Code.NONODE.intValue()) { callback.operationFinished(ctx, null); return; } if (rc != Code.OK.intValue()) { KeeperException e = ZkUtils.logErrorAndCreateZKException( "Could not read subscription data for topic: " + topic.toStringUtf8() + ", subscriberId: " + subscriberId.toStringUtf8(), path, rc); callback.operationFailed(ctx, new PubSubException.ServiceDownException(e)); return; } Versioned subData; try { subData = new Versioned( SubscriptionStateUtils.parseSubscriptionData(data), new ZkVersion(stat.getVersion())); } catch (InvalidProtocolBufferException ex) { String msg = "Failed to deserialize subscription data for topic: " + topic.toStringUtf8() + " subscriberId: " + subscriberId.toStringUtf8(); logger.error(msg, ex); callback.operationFailed(ctx, new PubSubException.UnexpectedConditionException(msg)); return; } if (logger.isDebugEnabled()) { logger.debug("Found subscription while acquiring topic: " + topic.toStringUtf8() + " subscriberId: " + subscriberId.toStringUtf8() + " data: " + SubscriptionStateUtils.toString(subData.getValue())); } callback.operationFinished(ctx, subData); } }, ctx); } @Override public void readSubscriptions(final ByteString topic, final Callback>> cb, final Object ctx) { String topicSubscribersPath = topicSubscribersPath(new StringBuilder(), topic).toString(); zk.getChildren(topicSubscribersPath, false, new SafeAsyncZKCallback.ChildrenCallback() { @Override public void safeProcessResult(int rc, String path, final Object ctx, final List children) { if (rc != Code.OK.intValue() && rc != Code.NONODE.intValue()) { KeeperException e = ZkUtils.logErrorAndCreateZKException("Could not read subscribers for topic " + topic.toStringUtf8(), path, rc); cb.operationFailed(ctx, new PubSubException.ServiceDownException(e)); return; } final Map> topicSubs = new ConcurrentHashMap>(); if (rc == Code.NONODE.intValue() || children.size() == 0) { if (logger.isDebugEnabled()) { logger.debug("No subscriptions found while acquiring topic: " + topic.toStringUtf8()); } cb.operationFinished(ctx, topicSubs); return; } final AtomicBoolean failed = new AtomicBoolean(); final AtomicInteger count = new AtomicInteger(); for (final String child : children) { final ByteString subscriberId = ByteString.copyFromUtf8(child); final String childPath = path + "/" + child; zk.getData(childPath, false, new SafeAsyncZKCallback.DataCallback() { @Override public void safeProcessResult(int rc, String path, Object ctx, byte[] data, Stat stat) { if (rc != Code.OK.intValue()) { KeeperException e = ZkUtils.logErrorAndCreateZKException( "Could not read subscription data for topic: " + topic.toStringUtf8() + ", 
subscriberId: " + subscriberId.toStringUtf8(), path, rc); reportFailure(new PubSubException.ServiceDownException(e)); return; } if (failed.get()) { return; } Versioned subData; try { subData = new Versioned( SubscriptionStateUtils.parseSubscriptionData(data), new ZkVersion(stat.getVersion())); } catch (InvalidProtocolBufferException ex) { String msg = "Failed to deserialize subscription data for topic: " + topic.toStringUtf8() + " subscriberId: " + subscriberId.toStringUtf8(); logger.error(msg, ex); reportFailure(new PubSubException.UnexpectedConditionException(msg)); return; } if (logger.isDebugEnabled()) { logger.debug("Found subscription while acquiring topic: " + topic.toStringUtf8() + " subscriberId: " + child + "state: " + SubscriptionStateUtils.toString(subData.getValue())); } topicSubs.put(subscriberId, subData); if (count.incrementAndGet() == children.size()) { assert topicSubs.size() == count.get(); cb.operationFinished(ctx, topicSubs); } } private void reportFailure(PubSubException e) { if (failed.compareAndSet(false, true)) cb.operationFailed(ctx, e); } }, ctx); } } }, ctx); } } /** * ZooKeeper base topic ownership manager. */ static class ZkTopicOwnershipManagerImpl implements TopicOwnershipManager { ZooKeeper zk; ServerConfiguration cfg; ZkTopicOwnershipManagerImpl(ServerConfiguration conf, ZooKeeper zk) { this.cfg = conf; this.zk = zk; } @Override public void close() throws IOException { // do nothing in zookeeper based impl } /** * Return znode path to store topic owner. * * @param topic * Topic Name * @return znode path to store topic owner. */ String hubPath(ByteString topic) { return cfg.getZkTopicPath(new StringBuilder(), topic).append("/hub").toString(); } @Override public void readOwnerInfo(final ByteString topic, final Callback> callback, Object ctx) { String ownerPath = hubPath(topic); zk.getData(ownerPath, false, new SafeAsyncZKCallback.DataCallback() { @Override public void safeProcessResult(int rc, String path, Object ctx, byte[] data, Stat stat) { if (Code.NONODE.intValue() == rc) { callback.operationFinished(ctx, null); return; } if (Code.OK.intValue() != rc) { KeeperException e = ZkUtils.logErrorAndCreateZKException("Could not read ownership for topic: " + topic.toStringUtf8(), path, rc); callback.operationFailed(ctx, new PubSubException.ServiceDownException(e)); return; } HubInfo owner = null; try { owner = HubInfo.parse(new String(data)); } catch (HubInfo.InvalidHubInfoException ihie) { logger.warn("Failed to parse hub info for topic " + topic.toStringUtf8() + " : ", ihie); } int version = stat.getVersion(); callback.operationFinished(ctx, new Versioned(owner, new ZkVersion(version))); return; } }, ctx); } @Override public void writeOwnerInfo(final ByteString topic, final HubInfo owner, final Version version, final Callback callback, Object ctx) { if (Version.NEW == version) { createOwnerInfo(topic, owner, callback, ctx); return; } if (!(version instanceof ZkVersion)) { callback.operationFailed(ctx, new PubSubException.UnexpectedConditionException( "Invalid version provided to update owner info for topic " + topic.toStringUtf8())); return; } int znodeVersion = ((ZkVersion)version).getZnodeVersion(); zk.setData(hubPath(topic), owner.toString().getBytes(), znodeVersion, new SafeAsyncZKCallback.StatCallback() { @Override public void safeProcessResult(int rc, String path, Object ctx, Stat stat) { if (rc == Code.NONODE.intValue()) { // no node callback.operationFailed(ctx, PubSubException.create(StatusCode.NO_TOPIC_OWNER_INFO, "No owner info found for topic " + 
topic.toStringUtf8())); return; } else if (rc == Code.BadVersion) { // bad version callback.operationFailed(ctx, PubSubException.create(StatusCode.BAD_VERSION, "Bad version provided to update owner info of topic " + topic.toStringUtf8())); return; } else if (Code.OK.intValue() == rc) { callback.operationFinished(ctx, new ZkVersion(stat.getVersion())); return; } else { KeeperException e = ZkUtils.logErrorAndCreateZKException( "Failed to update ownership of topic " + topic.toStringUtf8() + " to " + owner, path, rc); callback.operationFailed(ctx, new PubSubException.ServiceDownException(e)); return; } } }, ctx); } protected void createOwnerInfo(final ByteString topic, final HubInfo owner, final Callback callback, Object ctx) { String ownerPath = hubPath(topic); ZkUtils.createFullPathOptimistic(zk, ownerPath, owner.toString().getBytes(), Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT, new SafeAsyncZKCallback.StringCallback() { @Override public void safeProcessResult(int rc, String path, Object ctx, String name) { if (Code.OK.intValue() == rc) { // assume the initial version is 0 callback.operationFinished(ctx, new ZkVersion(0)); return; } else if (Code.NODEEXISTS.intValue() == rc) { // node existed callback.operationFailed(ctx, PubSubException.create(StatusCode.TOPIC_OWNER_INFO_EXISTS, "Owner info of topic " + topic.toStringUtf8() + " existed.")); return; } else { KeeperException e = ZkUtils.logErrorAndCreateZKException( "Failed to create znode for ownership of topic: " + topic.toStringUtf8(), path, rc); callback.operationFailed(ctx, new PubSubException.ServiceDownException(e)); return; } } }, ctx); } @Override public void deleteOwnerInfo(final ByteString topic, final Version version, final Callback callback, Object ctx) { int znodeVersion = -1; if (Version.ANY != version) { if (!(version instanceof ZkVersion)) { callback.operationFailed(ctx, new PubSubException.UnexpectedConditionException( "Invalid version provided to delete owner info for topic " + topic.toStringUtf8())); return; } else { znodeVersion = ((ZkVersion)version).getZnodeVersion(); } } zk.delete(hubPath(topic), znodeVersion, new SafeAsyncZKCallback.VoidCallback() { @Override public void safeProcessResult(int rc, String path, Object ctx) { if (Code.OK.intValue() == rc) { if (logger.isDebugEnabled()) { logger.debug("Successfully deleted owner info for topic " + topic.toStringUtf8() + "."); } callback.operationFinished(ctx, null); return; } else if (Code.NONODE.intValue() == rc) { // no node callback.operationFailed(ctx, PubSubException.create(StatusCode.NO_TOPIC_OWNER_INFO, "No owner info found for topic " + topic.toStringUtf8())); return; } else if (Code.BadVersion == rc) { // bad version callback.operationFailed(ctx, PubSubException.create(StatusCode.BAD_VERSION, "Bad version provided to delete owner info of topic " + topic.toStringUtf8())); return; } else { KeeperException e = ZkUtils.logErrorAndCreateZKException( "Failed to delete owner info for topic " + topic.toStringUtf8(), path, rc); callback.operationFailed(ctx, new PubSubException.ServiceDownException(e)); } } }, ctx); } } @Override public void format(ServerConfiguration cfg, ZooKeeper zk) throws IOException { try { ZKUtil.deleteRecursive(zk, cfg.getZkTopicsPrefix(new StringBuilder()).toString()); } catch (KeeperException.NoNodeException e) { logger.debug("Hedwig root node doesn't exist in zookeeper to delete"); } catch (KeeperException ke) { throw new IOException(ke); } catch (InterruptedException ie) { throw new IOException(ie); } } } 
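All three ZooKeeper-backed managers above follow the same versioned-write convention: Version.NEW maps to a znode create (finishing with ZkVersion(0)), Version.ANY maps to znode version -1 (an unconditional write or delete), and any other ZkVersion must still match the znode's current data version or the operation fails with BAD_VERSION. Below is a minimal sketch of that compare-and-set pattern against the plain ZooKeeper async API; VersionedZkWriter and writeIfVersionMatches are hypothetical names used only for illustration, not classes in this release.

import org.apache.zookeeper.AsyncCallback.StatCallback;
import org.apache.zookeeper.KeeperException.Code;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

class VersionedZkWriter {
    private final ZooKeeper zk;

    VersionedZkWriter(ZooKeeper zk) {
        this.zk = zk;
    }

    // Update the znode at path only if its data version still equals
    // expectedVersion; pass -1 (the Version.ANY case) to write unconditionally.
    void writeIfVersionMatches(String path, byte[] data, int expectedVersion) {
        zk.setData(path, data, expectedVersion, new StatCallback() {
            @Override
            public void processResult(int rc, String p, Object ctx, Stat stat) {
                if (rc == Code.OK.intValue()) {
                    // success: stat.getVersion() is the new version, which the
                    // managers wrap in a ZkVersion and hand back to the caller
                } else if (rc == Code.BADVERSION.intValue()) {
                    // a concurrent writer updated the znode first: BAD_VERSION
                } else if (rc == Code.NONODE.intValue()) {
                    // znode is gone: "no persistence/owner info found"
                } else {
                    // anything else maps to a ServiceDownException upstream
                }
            }
        }, null);
    }
}

The managers above wrap exactly this return-code dispatch around setData and delete, differing only in which PubSubException they surface for each code.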
bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/netty/000077500000000000000000000000001244507361200304305ustar00rootroot00000000000000PubSubServer.java000066400000000000000000000534551244507361200336170ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/netty/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.netty; import java.io.File; import java.io.IOException; import java.net.InetSocketAddress; import java.net.MalformedURLException; import java.util.Collections; import java.util.HashMap; import java.util.Map; import java.util.concurrent.CountDownLatch; import java.util.concurrent.Executors; import java.util.concurrent.ScheduledExecutorService; import java.util.concurrent.SynchronousQueue; import java.util.concurrent.TimeUnit; import com.google.common.annotations.VisibleForTesting; import org.apache.bookkeeper.conf.ClientConfiguration; import org.apache.bookkeeper.client.BookKeeper; import org.apache.bookkeeper.client.BKException; import org.apache.commons.configuration.ConfigurationException; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.WatchedEvent; import org.apache.zookeeper.Watcher; import org.apache.zookeeper.ZooKeeper; import org.jboss.netty.bootstrap.ServerBootstrap; import org.jboss.netty.channel.group.ChannelGroup; import org.jboss.netty.channel.group.DefaultChannelGroup; import org.jboss.netty.channel.socket.ClientSocketChannelFactory; import org.jboss.netty.channel.socket.ServerSocketChannelFactory; import org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory; import org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory; import org.jboss.netty.logging.InternalLoggerFactory; import org.jboss.netty.logging.Log4JLoggerFactory; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.protocol.PubSubProtocol.OperationType; import org.apache.hedwig.server.common.ServerConfiguration; import org.apache.hedwig.server.common.TerminateJVMExceptionHandler; import org.apache.hedwig.server.delivery.DeliveryManager; import org.apache.hedwig.server.delivery.FIFODeliveryManager; import org.apache.hedwig.server.handlers.CloseSubscriptionHandler; import org.apache.hedwig.server.handlers.ConsumeHandler; import org.apache.hedwig.server.handlers.Handler; import org.apache.hedwig.server.handlers.NettyHandlerBean; import org.apache.hedwig.server.handlers.PublishHandler; import org.apache.hedwig.server.handlers.SubscribeHandler; import org.apache.hedwig.server.handlers.SubscriptionChannelManager; import org.apache.hedwig.server.handlers.SubscriptionChannelManager.SubChannelDisconnectedListener; import 
org.apache.hedwig.server.handlers.UnsubscribeHandler; import org.apache.hedwig.server.jmx.HedwigMBeanRegistry; import org.apache.hedwig.server.meta.MetadataManagerFactory; import org.apache.hedwig.server.meta.ZkMetadataManagerFactory; import org.apache.hedwig.server.persistence.BookkeeperPersistenceManager; import org.apache.hedwig.server.persistence.LocalDBPersistenceManager; import org.apache.hedwig.server.persistence.PersistenceManager; import org.apache.hedwig.server.persistence.PersistenceManagerWithRangeScan; import org.apache.hedwig.server.persistence.ReadAheadCache; import org.apache.hedwig.server.regions.HedwigHubClientFactory; import org.apache.hedwig.server.regions.RegionManager; import org.apache.hedwig.server.ssl.SslServerContextFactory; import org.apache.hedwig.server.subscriptions.InMemorySubscriptionManager; import org.apache.hedwig.server.subscriptions.SubscriptionManager; import org.apache.hedwig.server.subscriptions.MMSubscriptionManager; import org.apache.hedwig.server.topics.MMTopicManager; import org.apache.hedwig.server.topics.TopicManager; import org.apache.hedwig.server.topics.TrivialOwnAllTopicManager; import org.apache.hedwig.server.topics.ZkTopicManager; import org.apache.hedwig.util.ConcurrencyUtils; import org.apache.hedwig.util.Either; import org.apache.hedwig.zookeeper.SafeAsyncCallback; public class PubSubServer { static Logger logger = LoggerFactory.getLogger(PubSubServer.class); private static final String JMXNAME_PREFIX = "PubSubServer_"; // Netty related variables ServerSocketChannelFactory serverChannelFactory; ClientSocketChannelFactory clientChannelFactory; ServerConfiguration conf; org.apache.hedwig.client.conf.ClientConfiguration clientConfiguration; ChannelGroup allChannels; // Manager components that make up the PubSubServer PersistenceManager pm; DeliveryManager dm; TopicManager tm; SubscriptionManager sm; RegionManager rm; // Metadata Manager Factory MetadataManagerFactory mm; ZooKeeper zk; // null if we are in standalone mode BookKeeper bk; // null if we are in standalone mode // we use this to prevent long stack chains from building up in callbacks ScheduledExecutorService scheduler; // JMX Beans NettyHandlerBean jmxNettyBean; PubSubServerBean jmxServerBean; final ThreadGroup tg; protected PersistenceManager instantiatePersistenceManager(TopicManager topicMgr) throws IOException, InterruptedException { PersistenceManagerWithRangeScan underlyingPM; if (conf.isStandalone()) { underlyingPM = LocalDBPersistenceManager.instance(); } else { try { ClientConfiguration bkConf = new ClientConfiguration(); bkConf.addConfiguration(conf.getConf()); bk = new BookKeeper(bkConf, zk, clientChannelFactory); } catch (KeeperException e) { logger.error("Could not instantiate bookkeeper client", e); throw new IOException(e); } underlyingPM = new BookkeeperPersistenceManager(bk, mm, topicMgr, conf, scheduler); } PersistenceManager pm = underlyingPM; if (conf.getReadAheadEnabled()) { pm = new ReadAheadCache(underlyingPM, conf).start(); } return pm; } protected SubscriptionManager instantiateSubscriptionManager(TopicManager tm, PersistenceManager pm, DeliveryManager dm) { if (conf.isStandalone()) { return new InMemorySubscriptionManager(conf, tm, pm, dm, scheduler); } else { return new MMSubscriptionManager(conf, mm, tm, pm, dm, scheduler); } } protected RegionManager instantiateRegionManager(PersistenceManager pm, ScheduledExecutorService scheduler) { return new RegionManager(pm, conf, zk, scheduler, new HedwigHubClientFactory(conf, clientConfiguration, 
clientChannelFactory)); } protected void instantiateZookeeperClient() throws Exception { if (!conf.isStandalone()) { final CountDownLatch signalZkReady = new CountDownLatch(1); zk = new ZooKeeper(conf.getZkHost(), conf.getZkTimeout(), new Watcher() { @Override public void process(WatchedEvent event) { if(Event.KeeperState.SyncConnected.equals(event.getState())) { signalZkReady.countDown(); } } }); // wait until connection is effective if (!signalZkReady.await(conf.getZkTimeout()*2, TimeUnit.MILLISECONDS)) { logger.error("Could not establish connection with ZooKeeper after zk_timeout*2 = " + conf.getZkTimeout()*2 + " ms. (Default value for zk_timeout is 2000)."); throw new Exception("Could not establish connection with ZooKeeper after zk_timeout*2 = " + conf.getZkTimeout()*2 + " ms. (Default value for zk_timeout is 2000)."); } } } protected void instantiateMetadataManagerFactory() throws Exception { if (conf.isStandalone()) { return; } mm = MetadataManagerFactory.newMetadataManagerFactory(conf, zk); } protected TopicManager instantiateTopicManager() throws IOException { TopicManager tm; if (conf.isStandalone()) { tm = new TrivialOwnAllTopicManager(conf, scheduler); } else { try { if (conf.isMetadataManagerBasedTopicManagerEnabled()) { tm = new MMTopicManager(conf, zk, mm, scheduler); } else { if (!(mm instanceof ZkMetadataManagerFactory)) { throw new IOException("Uses " + mm.getClass().getName() + " to store hedwig metadata, " + "but uses zookeeper ephemeral znodes to store topic ownership. " + "Check your configuration as this could lead to scalability issues."); } tm = new ZkTopicManager(zk, conf, scheduler); } } catch (PubSubException e) { logger.error("Could not instantiate TopicOwnershipManager based topic manager", e); throw new IOException(e); } } return tm; } protected Map initializeNettyHandlers( TopicManager tm, DeliveryManager dm, PersistenceManager pm, SubscriptionManager sm, SubscriptionChannelManager subChannelMgr) { Map handlers = new HashMap(); handlers.put(OperationType.PUBLISH, new PublishHandler(tm, pm, conf)); handlers.put(OperationType.SUBSCRIBE, new SubscribeHandler(conf, tm, dm, pm, sm, subChannelMgr)); handlers.put(OperationType.UNSUBSCRIBE, new UnsubscribeHandler(conf, tm, sm, dm, subChannelMgr)); handlers.put(OperationType.CONSUME, new ConsumeHandler(tm, sm, conf)); handlers.put(OperationType.CLOSESUBSCRIPTION, new CloseSubscriptionHandler(conf, tm, sm, dm, subChannelMgr)); handlers = Collections.unmodifiableMap(handlers); return handlers; } protected void initializeNetty(SslServerContextFactory sslFactory, Map handlers, SubscriptionChannelManager subChannelMgr) { boolean isSSLEnabled = (sslFactory != null) ? true : false; InternalLoggerFactory.setDefaultFactory(new Log4JLoggerFactory()); ServerBootstrap bootstrap = new ServerBootstrap(serverChannelFactory); UmbrellaHandler umbrellaHandler = new UmbrellaHandler(allChannels, handlers, subChannelMgr, isSSLEnabled); PubSubServerPipelineFactory pipeline = new PubSubServerPipelineFactory(umbrellaHandler, sslFactory, conf.getMaximumMessageSize()); bootstrap.setPipelineFactory(pipeline); bootstrap.setOption("child.tcpNoDelay", true); bootstrap.setOption("child.keepAlive", true); bootstrap.setOption("reuseAddress", true); // Bind and start to accept incoming connections. allChannels.add(bootstrap.bind(isSSLEnabled ? 
new InetSocketAddress(conf.getSSLServerPort()) : new InetSocketAddress(conf.getServerPort()))); logger.info("Going into receive loop"); } public void shutdown() { // TODO: tell bk to close logs // Stop topic manager first since it is core of Hub server tm.stop(); // Stop the RegionManager. rm.stop(); // Stop the DeliveryManager and ReadAheadCache threads (if // applicable). dm.stop(); pm.stop(); // Stop the SubscriptionManager if needed. sm.stop(); // Shutdown metadata manager if needed if (null != mm) { try { mm.shutdown(); } catch (IOException ie) { logger.error("Error while shutdown metadata manager factory!", ie); } } // Shutdown the ZooKeeper and BookKeeper clients only if we are // not in stand-alone mode. try { if (bk != null) bk.close(); if (zk != null) zk.close(); } catch (InterruptedException e) { logger.error("Error while closing ZooKeeper client : ", e); } catch (BKException bke) { logger.error("Error while closing BookKeeper client : ", bke); } // Close and release the Netty channels and resources allChannels.close().awaitUninterruptibly(); serverChannelFactory.releaseExternalResources(); clientChannelFactory.releaseExternalResources(); scheduler.shutdown(); // unregister jmx unregisterJMX(); } protected void registerJMX(SubscriptionChannelManager subChannelMgr) { try { String jmxName = JMXNAME_PREFIX + conf.getServerPort() + "_" + conf.getSSLServerPort(); jmxServerBean = new PubSubServerBean(jmxName); HedwigMBeanRegistry.getInstance().register(jmxServerBean, null); try { jmxNettyBean = new NettyHandlerBean(subChannelMgr); HedwigMBeanRegistry.getInstance().register(jmxNettyBean, jmxServerBean); } catch (Exception e) { logger.warn("Failed to register with JMX", e); jmxNettyBean = null; } } catch (Exception e) { logger.warn("Failed to register with JMX", e); jmxServerBean = null; } if (pm instanceof ReadAheadCache) { ((ReadAheadCache)pm).registerJMX(jmxServerBean); } } protected void unregisterJMX() { if (pm != null && pm instanceof ReadAheadCache) { ((ReadAheadCache)pm).unregisterJMX(); } try { if (jmxNettyBean != null) { HedwigMBeanRegistry.getInstance().unregister(jmxNettyBean); } } catch (Exception e) { logger.warn("Failed to unregister with JMX", e); } try { if (jmxServerBean != null) { HedwigMBeanRegistry.getInstance().unregister(jmxServerBean); } } catch (Exception e) { logger.warn("Failed to unregister with JMX", e); } jmxNettyBean = null; jmxServerBean = null; } /** * Starts the hedwig server on the given port * * @param port * @throws ConfigurationException * if there is something wrong with the given configuration * @throws IOException * @throws InterruptedException * @throws ConfigurationException */ public PubSubServer(final ServerConfiguration serverConfiguration, final org.apache.hedwig.client.conf.ClientConfiguration clientConfiguration, final Thread.UncaughtExceptionHandler exceptionHandler) throws ConfigurationException { // First validate the serverConfiguration this.conf = serverConfiguration; serverConfiguration.validate(); // Validate the client configuration this.clientConfiguration = clientConfiguration; clientConfiguration.validate(); // We need a custom thread group, so that we can override the uncaught // exception method tg = new ThreadGroup("hedwig") { @Override public void uncaughtException(Thread t, Throwable e) { exceptionHandler.uncaughtException(t, e); } }; // ZooKeeper threads register their own handler. But if some work that // we do in ZK threads throws an exception, we want our handler to be // called, not theirs. 
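        // (With the convenience constructors below, exceptionHandler is a
        // TerminateJVMExceptionHandler, so an exception escaping a ZooKeeper
        // callback deliberately takes the whole hub process down.)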
SafeAsyncCallback.setUncaughtExceptionHandler(exceptionHandler); } public void start() throws Exception { final SynchronousQueue> queue = new SynchronousQueue>(); new Thread(tg, new Runnable() { @Override public void run() { try { // Since zk is needed by almost everyone,try to see if we // need that first scheduler = Executors.newSingleThreadScheduledExecutor(); serverChannelFactory = new NioServerSocketChannelFactory(Executors.newCachedThreadPool(), Executors .newCachedThreadPool()); clientChannelFactory = new NioClientSocketChannelFactory(Executors.newCachedThreadPool(), Executors .newCachedThreadPool()); instantiateZookeeperClient(); instantiateMetadataManagerFactory(); tm = instantiateTopicManager(); pm = instantiatePersistenceManager(tm); dm = new FIFODeliveryManager(pm, conf); dm.start(); sm = instantiateSubscriptionManager(tm, pm, dm); rm = instantiateRegionManager(pm, scheduler); sm.addListener(rm); allChannels = new DefaultChannelGroup("hedwig"); // Initialize the Netty Handlers (used by the // UmbrellaHandler) once so they can be shared by // both the SSL and non-SSL channels. SubscriptionChannelManager subChannelMgr = new SubscriptionChannelManager(); subChannelMgr.addSubChannelDisconnectedListener((SubChannelDisconnectedListener) dm); Map handlers = initializeNettyHandlers(tm, dm, pm, sm, subChannelMgr); // Initialize Netty for the regular non-SSL channels initializeNetty(null, handlers, subChannelMgr); if (conf.isSSLEnabled()) { initializeNetty(new SslServerContextFactory(conf), handlers, subChannelMgr); } // register jmx registerJMX(subChannelMgr); } catch (Exception e) { ConcurrencyUtils.put(queue, Either.right(e)); return; } ConcurrencyUtils.put(queue, Either.of(new Object(), (Exception) null)); } }).start(); Either either = ConcurrencyUtils.take(queue); if (either.left() == null) { throw either.right(); } } public PubSubServer(ServerConfiguration serverConfiguration, org.apache.hedwig.client.conf.ClientConfiguration clientConfiguration) throws Exception { this(serverConfiguration, clientConfiguration, new TerminateJVMExceptionHandler()); } public PubSubServer(ServerConfiguration serverConfiguration) throws Exception { this(serverConfiguration, new org.apache.hedwig.client.conf.ClientConfiguration()); } @VisibleForTesting public DeliveryManager getDeliveryManager() { return dm; } /** * * @param msg * @param rc * : code to exit with */ public static void errorMsgAndExit(String msg, Throwable t, int rc) { logger.error(msg, t); System.err.println(msg); System.exit(rc); } public final static int RC_INVALID_CONF_FILE = 1; public final static int RC_MISCONFIGURED = 2; public final static int RC_OTHER = 3; /** * @param args */ public static void main(String[] args) { logger.info("Attempting to start Hedwig"); ServerConfiguration serverConfiguration = new ServerConfiguration(); // The client configuration for the hedwig client in the region manager. 
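        // Usage sketch (file names illustrative, both arguments optional):
        //   PubSubServer [server.conf [client.conf]]
        // args[0], when present, is loaded into the server configuration below;
        // args[1], when present, configures this region-manager client.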
org.apache.hedwig.client.conf.ClientConfiguration regionMgrClientConfiguration = new org.apache.hedwig.client.conf.ClientConfiguration(); if (args.length > 0) { String confFile = args[0]; try { serverConfiguration.loadConf(new File(confFile).toURI().toURL()); } catch (MalformedURLException e) { String msg = "Could not open server configuration file: " + confFile; errorMsgAndExit(msg, e, RC_INVALID_CONF_FILE); } catch (ConfigurationException e) { String msg = "Malformed server configuration file: " + confFile; errorMsgAndExit(msg, e, RC_MISCONFIGURED); } logger.info("Using configuration file " + confFile); } if (args.length > 1) { // args[1] is the client configuration file. String confFile = args[1]; try { regionMgrClientConfiguration.loadConf(new File(confFile).toURI().toURL()); } catch (MalformedURLException e) { String msg = "Could not open client configuration file: " + confFile; errorMsgAndExit(msg, e, RC_INVALID_CONF_FILE); } catch (ConfigurationException e) { String msg = "Malformed client configuration file: " + confFile; errorMsgAndExit(msg, e, RC_MISCONFIGURED); } } try { new PubSubServer(serverConfiguration, regionMgrClientConfiguration).start(); } catch (Throwable t) { errorMsgAndExit("Error during startup", t, RC_OTHER); } } } PubSubServerBean.java000066400000000000000000000045531244507361200344000ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/netty/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package org.apache.hedwig.server.netty; import org.apache.hedwig.server.jmx.HedwigMBeanInfo; import org.apache.hedwig.server.netty.ServerStats.OpStatData; import org.apache.hedwig.protocol.PubSubProtocol.OperationType; /** * PubSub Server Bean */ public class PubSubServerBean implements PubSubServerMXBean, HedwigMBeanInfo { private final String name; public PubSubServerBean(String jmxName) { this.name = jmxName; } @Override public String getName() { return name; } @Override public boolean isHidden() { return false; } @Override public OpStatData getPubStats() { return ServerStats.getInstance().getOpStats(OperationType.PUBLISH).toOpStatData(); } @Override public OpStatData getSubStats() { return ServerStats.getInstance().getOpStats(OperationType.SUBSCRIBE).toOpStatData(); } @Override public OpStatData getUnsubStats() { return ServerStats.getInstance().getOpStats(OperationType.UNSUBSCRIBE).toOpStatData(); } @Override public OpStatData getConsumeStats() { return ServerStats.getInstance().getOpStats(OperationType.CONSUME).toOpStatData(); } @Override public long getNumRequestsReceived() { return ServerStats.getInstance().getNumRequestsReceived(); } @Override public long getNumRequestsRedirect() { return ServerStats.getInstance().getNumRequestsRedirect(); } @Override public long getNumMessagesDelivered() { return ServerStats.getInstance().getNumMessagesDelivered(); } } PubSubServerMXBean.java000066400000000000000000000031411244507361200346350ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/netty/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.netty; import org.apache.hedwig.server.netty.ServerStats.OpStatData; /** * PubSub Server MBean */ public interface PubSubServerMXBean { /** * @return publish stats */ public OpStatData getPubStats(); /** * @return subscription stats */ public OpStatData getSubStats(); /** * @return unsub stats */ public OpStatData getUnsubStats(); /** * @return consume stats */ public OpStatData getConsumeStats(); /** * @return number of requests received */ public long getNumRequestsReceived(); /** * @return number of requests redirect */ public long getNumRequestsRedirect(); /** * @return number of messages delivered */ public long getNumMessagesDelivered(); } PubSubServerPipelineFactory.java000066400000000000000000000061021244507361200366200ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/netty/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. 
The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.netty; import org.jboss.netty.channel.ChannelPipeline; import org.jboss.netty.channel.ChannelPipelineFactory; import org.jboss.netty.channel.Channels; import org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder; import org.jboss.netty.handler.codec.frame.LengthFieldPrepender; import org.jboss.netty.handler.codec.protobuf.ProtobufDecoder; import org.jboss.netty.handler.codec.protobuf.ProtobufEncoder; import org.jboss.netty.handler.ssl.SslHandler; import org.apache.hedwig.protocol.PubSubProtocol; import org.apache.hedwig.server.ssl.SslServerContextFactory; public class PubSubServerPipelineFactory implements ChannelPipelineFactory { // TODO: make these conf settings final static int MAX_WORKER_THREADS = 32; final static int MAX_CHANNEL_MEMORY_SIZE = 10 * 1024 * 1024; final static int MAX_TOTAL_MEMORY_SIZE = 100 * 1024 * 1024; private UmbrellaHandler uh; private SslServerContextFactory sslFactory; private int maxMessageSize; /** * * @param uh * @param sslFactory * may be null if ssl is disabled * @param cfg */ public PubSubServerPipelineFactory(UmbrellaHandler uh, SslServerContextFactory sslFactory, int maxMessageSize) { this.uh = uh; this.sslFactory = sslFactory; this.maxMessageSize = maxMessageSize; } public ChannelPipeline getPipeline() throws Exception { ChannelPipeline pipeline = Channels.pipeline(); if (sslFactory != null) { pipeline.addLast("ssl", new SslHandler(sslFactory.getEngine())); } pipeline.addLast("lengthbaseddecoder", new LengthFieldBasedFrameDecoder(maxMessageSize, 0, 4, 0, 4)); pipeline.addLast("lengthprepender", new LengthFieldPrepender(4)); pipeline.addLast("protobufdecoder", new ProtobufDecoder(PubSubProtocol.PubSubRequest.getDefaultInstance())); pipeline.addLast("protobufencoder", new ProtobufEncoder()); // pipeline.addLast("executor", new ExecutionHandler( // new OrderedMemoryAwareThreadPoolExecutor(MAX_WORKER_THREADS, // MAX_CHANNEL_MEMORY_SIZE, MAX_TOTAL_MEMORY_SIZE))); // // Dependency injection. pipeline.addLast("umbrellahandler", uh); return pipeline; } } bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/netty/ServerStats.java000066400000000000000000000137701244507361200335700ustar00rootroot00000000000000/* * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package org.apache.hedwig.server.netty; import java.util.HashMap; import java.util.Map; import java.beans.ConstructorProperties; import java.util.concurrent.atomic.AtomicLong; import org.apache.hedwig.protocol.PubSubProtocol.OperationType; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Server Stats */ public class ServerStats { private static final Logger LOG = LoggerFactory.getLogger(ServerStats.class); static ServerStats instance = new ServerStats(); /** * A read view of stats, also used in CompositeViewData to expose to JMX */ public static class OpStatData { private final long maxLatency, minLatency; private final double avgLatency; private final long numSuccessOps, numFailedOps; private final String latencyHist; @ConstructorProperties({"maxLatency", "minLatency", "avgLatency", "numSuccessOps", "numFailedOps", "latencyHist"}) public OpStatData(long maxLatency, long minLatency, double avgLatency, long numSuccessOps, long numFailedOps, String latencyHist) { this.maxLatency = maxLatency; this.minLatency = minLatency == Long.MAX_VALUE ? 0 : minLatency; this.avgLatency = avgLatency; this.numSuccessOps = numSuccessOps; this.numFailedOps = numFailedOps; this.latencyHist = latencyHist; } public long getMaxLatency() { return maxLatency; } public long getMinLatency() { return minLatency; } public double getAvgLatency() { return avgLatency; } public long getNumSuccessOps() { return numSuccessOps; } public long getNumFailedOps() { return numFailedOps; } public String getLatencyHist() { return latencyHist; } } /** * Operation Statistics */ public static class OpStats { static final int NUM_BUCKETS = 3*9 + 2; long maxLatency = 0; long minLatency = Long.MAX_VALUE; double totalLatency = 0.0f; long numSuccessOps = 0; long numFailedOps = 0; long[] latencyBuckets = new long[NUM_BUCKETS]; OpStats() {} /** * Increment number of failed operations */ synchronized public void incrementFailedOps() { ++numFailedOps; } /** * Update Latency */ synchronized public void updateLatency(long latency) { if (latency < 0) { // less than 0ms . Ideally this should not happen. // We have seen this latency negative in some cases due to the // behaviors of JVM. Ignoring the statistics updation for such // cases. LOG.warn("Latency time coming negative"); return; } totalLatency += latency; ++numSuccessOps; if (latency < minLatency) { minLatency = latency; } if (latency > maxLatency) { maxLatency = latency; } int bucket; if (latency <= 100) { // less than 100ms bucket = (int)(latency / 10); } else if (latency <= 1000) { // 100ms ~ 1000ms bucket = 1 * 9 + (int)(latency / 100); } else if (latency <= 10000) { // 1s ~ 10s bucket = 2 * 9 + (int)(latency / 1000); } else { // more than 10s bucket = 3 * 9 + 1; } ++latencyBuckets[bucket]; } synchronized public OpStatData toOpStatData() { double avgLatency = numSuccessOps > 0 ? 
totalLatency / numSuccessOps : 0.0f; StringBuilder sb = new StringBuilder(); for (int i=0; i(); for (OperationType type : OperationType.values()) { stats.put(type, new OpStats()); } } Map stats; AtomicLong numRequestsReceived = new AtomicLong(0); AtomicLong numRequestsRedirect = new AtomicLong(0); AtomicLong numMessagesDelivered = new AtomicLong(0); /** * Stats of operations * * @param type * Operation Type * @return op stats */ public OpStats getOpStats(OperationType type) { return stats.get(type); } public void incrementRequestsReceived() { numRequestsReceived.incrementAndGet(); } public void incrementRequestsRedirect() { numRequestsRedirect.incrementAndGet(); } public void incrementMessagesDelivered() { numMessagesDelivered.incrementAndGet(); } public long getNumRequestsReceived() { return numRequestsReceived.get(); } public long getNumRequestsRedirect() { return numRequestsRedirect.get(); } public long getNumMessagesDelivered() { return numMessagesDelivered.get(); } } UmbrellaHandler.java000066400000000000000000000151221244507361200342560ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/netty/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package org.apache.hedwig.server.netty; import java.io.IOException; import java.util.Map; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.jboss.netty.channel.Channel; import org.jboss.netty.channel.ChannelFuture; import org.jboss.netty.channel.ChannelFutureListener; import org.jboss.netty.channel.ChannelHandlerContext; import org.jboss.netty.channel.ChannelPipelineCoverage; import org.jboss.netty.channel.ChannelStateEvent; import org.jboss.netty.channel.ExceptionEvent; import org.jboss.netty.channel.MessageEvent; import org.jboss.netty.channel.SimpleChannelHandler; import org.jboss.netty.channel.group.ChannelGroup; import org.jboss.netty.handler.codec.frame.CorruptedFrameException; import org.jboss.netty.handler.codec.frame.TooLongFrameException; import org.jboss.netty.handler.ssl.SslHandler; import org.apache.hedwig.exceptions.PubSubException.MalformedRequestException; import org.apache.hedwig.protocol.PubSubProtocol; import org.apache.hedwig.protocol.PubSubProtocol.OperationType; import org.apache.hedwig.protocol.PubSubProtocol.PubSubResponse; import org.apache.hedwig.protoextensions.PubSubResponseUtils; import org.apache.hedwig.server.handlers.ChannelDisconnectListener; import org.apache.hedwig.server.handlers.Handler; @ChannelPipelineCoverage("all") public class UmbrellaHandler extends SimpleChannelHandler { static Logger logger = LoggerFactory.getLogger(UmbrellaHandler.class); private final Map handlers; private final ChannelGroup allChannels; private final ChannelDisconnectListener channelDisconnectListener; private final boolean isSSLEnabled; public UmbrellaHandler(ChannelGroup allChannels, Map handlers, ChannelDisconnectListener channelDisconnectListener, boolean isSSLEnabled) { this.allChannels = allChannels; this.isSSLEnabled = isSSLEnabled; this.handlers = handlers; this.channelDisconnectListener = channelDisconnectListener; } @Override public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) throws Exception { Throwable throwable = e.getCause(); // Add here if there are more exceptions we need to be able to tolerate. // 1. IOException may be thrown when a channel is forcefully closed by // the other end, or by the ProtobufDecoder when an invalid protobuf is // received // 2. TooLongFrameException is thrown by the LengthBasedDecoder if it // receives a packet that is too big // 3. CorruptedFramException is thrown by the LengthBasedDecoder when // the length is negative etc. if (throwable instanceof IOException || throwable instanceof TooLongFrameException || throwable instanceof CorruptedFrameException) { e.getChannel().close(); logger.debug("Uncaught exception", throwable); } else { // call our uncaught exception handler, which might decide to // shutdown the system Thread thread = Thread.currentThread(); thread.getUncaughtExceptionHandler().uncaughtException(thread, throwable); } } @Override public void channelOpen(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception { // If SSL is NOT enabled, then we can add this channel to the // ChannelGroup. Otherwise, that is done when the channel is connected // and the SSL handshake has completed successfully. 
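        // (Membership in allChannels is also how shutdown() tears connections
        // down: it calls allChannels.close().awaitUninterruptibly() before
        // releasing the channel factories.)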
if (!isSSLEnabled) { allChannels.add(ctx.getChannel()); } } @Override public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception { if (isSSLEnabled) { ctx.getPipeline().get(SslHandler.class).handshake(e.getChannel()).addListener(new ChannelFutureListener() { public void operationComplete(ChannelFuture future) throws Exception { if (future.isSuccess()) { logger.debug("SSL handshake has completed successfully!"); allChannels.add(future.getChannel()); } else { future.getChannel().close(); } } }); } } @Override public void channelDisconnected(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception { Channel channel = ctx.getChannel(); // subscribe handler needs to know about channel disconnects channelDisconnectListener.channelDisconnected(channel); channel.close(); } public static void sendErrorResponseToMalformedRequest(Channel channel, long txnId, String msg) { logger.debug("Malformed request from {}, msg = {}", channel.getRemoteAddress(), msg); MalformedRequestException mre = new MalformedRequestException(msg); PubSubResponse response = PubSubResponseUtils.getResponseForException(mre, txnId); channel.write(response); } @Override public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception { if (!(e.getMessage() instanceof PubSubProtocol.PubSubRequest)) { ctx.sendUpstream(e); return; } PubSubProtocol.PubSubRequest request = (PubSubProtocol.PubSubRequest) e.getMessage(); Handler handler = handlers.get(request.getType()); Channel channel = ctx.getChannel(); long txnId = request.getTxnId(); if (handler == null) { sendErrorResponseToMalformedRequest(channel, txnId, "Request type " + request.getType().getNumber() + " unknown"); return; } handler.handleRequest(request, channel); ServerStats.getInstance().incrementRequestsReceived(); } } bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/persistence/000077500000000000000000000000001244507361200316115ustar00rootroot00000000000000BookkeeperPersistenceManager.java000066400000000000000000001562711244507361200401770ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/persistence/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package org.apache.hedwig.server.persistence; import java.io.IOException; import java.util.Enumeration; import java.util.Iterator; import java.util.HashSet; import java.util.LinkedList; import java.util.List; import java.util.Map; import java.util.Set; import java.util.TreeMap; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ScheduledExecutorService; import java.util.concurrent.atomic.AtomicBoolean; import org.apache.bookkeeper.client.AsyncCallback.CloseCallback; import org.apache.bookkeeper.client.AsyncCallback.DeleteCallback; import org.apache.bookkeeper.client.BKException; import org.apache.bookkeeper.client.BookKeeper; import org.apache.bookkeeper.client.LedgerEntry; import org.apache.bookkeeper.client.LedgerHandle; import org.apache.bookkeeper.client.BookKeeper.DigestType; import org.apache.bookkeeper.versioning.Version; import org.apache.bookkeeper.versioning.Versioned; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import com.google.protobuf.ByteString; import com.google.protobuf.InvalidProtocolBufferException; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.exceptions.PubSubException.ServerNotResponsibleForTopicException; import org.apache.hedwig.protocol.PubSubProtocol.LedgerRange; import org.apache.hedwig.protocol.PubSubProtocol.LedgerRanges; import org.apache.hedwig.protocol.PubSubProtocol.Message; import org.apache.hedwig.protocol.PubSubProtocol.MessageSeqId; import org.apache.hedwig.protoextensions.MessageIdUtils; import org.apache.hedwig.server.common.ServerConfiguration; import org.apache.hedwig.server.common.TopicOpQueuer; import org.apache.hedwig.server.common.UnexpectedError; import org.apache.hedwig.server.meta.MetadataManagerFactory; import org.apache.hedwig.server.meta.TopicPersistenceManager; import org.apache.hedwig.server.persistence.ScanCallback.ReasonForFinish; import org.apache.hedwig.server.topics.TopicManager; import org.apache.hedwig.server.topics.TopicOwnershipChangeListener; import org.apache.hedwig.util.Callback; import org.apache.hedwig.zookeeper.SafeAsynBKCallback; import static org.apache.hedwig.util.VarArgs.va; /** * This persistence manager uses zookeeper and bookkeeper to store messages. * * Information about topics are stored in zookeeper with a znode named after the * topic that contains an ASCII encoded list with records of the following form: * *
 * <pre>
 * startSeqId(included)\tledgerId\n
 * </pre>
 *
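 * Within a single ledger, the message with sequence id s is stored as
 * BookKeeper entry (s - startSeqIdIncluded) of that ledger, since entry
 * ids within a ledger start at 0.
 *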
* */ public class BookkeeperPersistenceManager implements PersistenceManagerWithRangeScan, TopicOwnershipChangeListener { static Logger logger = LoggerFactory.getLogger(BookkeeperPersistenceManager.class); static byte[] passwd = "sillysecret".getBytes(); private BookKeeper bk; private TopicPersistenceManager tpManager; private ServerConfiguration cfg; private TopicManager tm; private static final long START_SEQ_ID = 1L; // max number of entries allowed in a ledger private static final long UNLIMITED_ENTRIES = 0L; private final long maxEntriesPerLedger; static class InMemoryLedgerRange { LedgerRange range; LedgerHandle handle; public InMemoryLedgerRange(LedgerRange range, LedgerHandle handle) { this.range = range; this.handle = handle; } public InMemoryLedgerRange(LedgerRange range) { this(range, null); } public long getStartSeqIdIncluded() { assert range.hasStartSeqIdIncluded(); return range.getStartSeqIdIncluded(); } } static class TopicInfo { /** * stores the last message-seq-id vector that has been pushed to BK for * persistence (but not necessarily acked yet by BK) * */ MessageSeqId lastSeqIdPushed; /** * stores the last message-id that has been acked by BK. This number is * basically used for limiting scans to not read past what has been * persisted by BK */ long lastEntryIdAckedInCurrentLedger = -1; // because BK ledgers starts // at 0 /** * stores a sorted structure of the ledgers for a topic, mapping from * the endSeqIdIncluded to the ledger info. This structure does not * include the current ledger */ TreeMap ledgerRanges = new TreeMap(); Version ledgerRangesVersion = Version.NEW; /** * This is the handle of the current ledger that is being used to write * messages */ InMemoryLedgerRange currentLedgerRange; /** * Flag to release topic when encountering unrecoverable exceptions */ AtomicBoolean doRelease = new AtomicBoolean(false); /** * Flag indicats the topic is changing ledger */ AtomicBoolean doChangeLedger = new AtomicBoolean(false); /** * Last seq id to change ledger. */ long lastSeqIdBeforeLedgerChange = -1; /** * List to buffer all persist requests during changing ledger. */ LinkedList deferredRequests = null; final static int UNLIMITED = 0; int messageBound = UNLIMITED; } Map topicInfos = new ConcurrentHashMap(); TopicOpQueuer queuer; /** * Instantiates a BookKeeperPersistence manager. * * @param bk * a reference to bookkeeper to use. * @param metaManagerFactory * a metadata manager factory handle to use. * @param tm * a reference to topic manager. 
* @param cfg * Server configuration object * @param executor * A executor */ public BookkeeperPersistenceManager(BookKeeper bk, MetadataManagerFactory metaManagerFactory, TopicManager tm, ServerConfiguration cfg, ScheduledExecutorService executor) { this.bk = bk; this.tpManager = metaManagerFactory.newTopicPersistenceManager(); this.cfg = cfg; this.tm = tm; this.maxEntriesPerLedger = cfg.getMaxEntriesPerLedger(); queuer = new TopicOpQueuer(executor); tm.addTopicOwnershipChangeListener(this); } private static LedgerRange buildLedgerRange(long ledgerId, long startOfLedger, MessageSeqId endOfLedger) { LedgerRange.Builder builder = LedgerRange.newBuilder().setLedgerId(ledgerId).setStartSeqIdIncluded(startOfLedger) .setEndSeqIdIncluded(endOfLedger); return builder.build(); } class RangeScanOp extends TopicOpQueuer.SynchronousOp { RangeScanRequest request; int numMessagesRead = 0; long totalSizeRead = 0; TopicInfo topicInfo; long startSeqIdToScan; public RangeScanOp(RangeScanRequest request) { this(request, -1L, 0, 0L); } public RangeScanOp(RangeScanRequest request, long startSeqId, int numMessagesRead, long totalSizeRead) { queuer.super(request.topic); this.request = request; this.startSeqIdToScan = startSeqId; this.numMessagesRead = numMessagesRead; this.totalSizeRead = totalSizeRead; } @Override protected void runInternal() { topicInfo = topicInfos.get(topic); if (topicInfo == null) { request.callback.scanFailed(request.ctx, new PubSubException.ServerNotResponsibleForTopicException("")); return; } // if startSeqIdToScan is less than zero, which means it is an unfinished scan request // we continue the scan from the provided position startReadingFrom(startSeqIdToScan < 0 ? request.startSeqId : startSeqIdToScan); } protected void read(final InMemoryLedgerRange imlr, final long startSeqId, final long endSeqId) { // Verify whether startSeqId falls in ledger range. // Only the left endpoint of range needs to be checked. if (imlr.getStartSeqIdIncluded() > startSeqId) { logger.error( "Invalid RangeScan read, startSeqId {} doesn't fall in ledger range [{} ~ {}]", va(startSeqId, imlr.getStartSeqIdIncluded(), imlr.range.hasEndSeqIdIncluded() ? 
imlr.range .getEndSeqIdIncluded().getLocalComponent() : "")); request.callback.scanFailed(request.ctx, new PubSubException.UnexpectedConditionException("Scan request is out of range")); // try release topic to reset the state lostTopic(topic); return; } if (imlr.handle == null) { bk.asyncOpenLedger(imlr.range.getLedgerId(), DigestType.CRC32, passwd, new SafeAsynBKCallback.OpenCallback() { @Override public void safeOpenComplete(int rc, LedgerHandle ledgerHandle, Object ctx) { if (rc == BKException.Code.OK) { imlr.handle = ledgerHandle; read(imlr, startSeqId, endSeqId); return; } BKException bke = BKException.create(rc); logger.error("Could not open ledger: " + imlr.range.getLedgerId() + " for topic: " + topic); request.callback.scanFailed(ctx, new PubSubException.ServiceDownException(bke)); return; } }, request.ctx); return; } // ledger handle is not null, we can read from it long correctedEndSeqId = Math.min(startSeqId + request.messageLimit - numMessagesRead - 1, endSeqId); if (logger.isDebugEnabled()) { logger.debug("Issuing a bk read for ledger: " + imlr.handle.getId() + " from entry-id: " + (startSeqId - imlr.getStartSeqIdIncluded()) + " to entry-id: " + (correctedEndSeqId - imlr.getStartSeqIdIncluded())); } imlr.handle.asyncReadEntries(startSeqId - imlr.getStartSeqIdIncluded(), correctedEndSeqId - imlr.getStartSeqIdIncluded(), new SafeAsynBKCallback.ReadCallback() { long expectedEntryId = startSeqId - imlr.getStartSeqIdIncluded(); @Override public void safeReadComplete(int rc, LedgerHandle lh, Enumeration seq, Object ctx) { if (rc != BKException.Code.OK || !seq.hasMoreElements()) { if (rc == BKException.Code.OK) { // means that there is no entries read, provide a meaningful exception rc = BKException.Code.NoSuchEntryException; } BKException bke = BKException.create(rc); logger.error("Error while reading from ledger: " + imlr.range.getLedgerId() + " for topic: " + topic.toStringUtf8(), bke); request.callback.scanFailed(request.ctx, new PubSubException.ServiceDownException(bke)); return; } LedgerEntry entry = null; while (seq.hasMoreElements()) { entry = seq.nextElement(); Message message; try { message = Message.parseFrom(entry.getEntryInputStream()); } catch (IOException e) { String msg = "Unreadable message found in ledger: " + imlr.range.getLedgerId() + " for topic: " + topic.toStringUtf8(); logger.error(msg, e); request.callback.scanFailed(ctx, new PubSubException.UnexpectedConditionException(msg)); return; } logger.debug("Read response from ledger: {} entry-id: {}", lh.getId(), entry.getEntryId()); assert expectedEntryId == entry.getEntryId() : "expectedEntryId (" + expectedEntryId + ") != entry.getEntryId() (" + entry.getEntryId() + ")"; assert (message.getMsgId().getLocalComponent() - imlr.getStartSeqIdIncluded()) == expectedEntryId; expectedEntryId++; request.callback.messageScanned(ctx, message); numMessagesRead++; totalSizeRead += message.getBody().size(); if (numMessagesRead >= request.messageLimit) { request.callback.scanFinished(ctx, ReasonForFinish.NUM_MESSAGES_LIMIT_EXCEEDED); return; } if (totalSizeRead >= request.sizeLimit) { request.callback.scanFinished(ctx, ReasonForFinish.SIZE_LIMIT_EXCEEDED); return; } } // continue scanning messages scanMessages(request, imlr.getStartSeqIdIncluded() + entry.getEntryId() + 1, numMessagesRead, totalSizeRead); } }, request.ctx); } protected void startReadingFrom(long startSeqId) { Map.Entry entry = topicInfo.ledgerRanges.ceilingEntry(startSeqId); if (entry == null) { // None of the old ledgers have this seq-id, we must use the // 
current ledger long endSeqId = topicInfo.currentLedgerRange.getStartSeqIdIncluded() + topicInfo.lastEntryIdAckedInCurrentLedger; if (endSeqId < startSeqId) { request.callback.scanFinished(request.ctx, ReasonForFinish.NO_MORE_MESSAGES); return; } read(topicInfo.currentLedgerRange, startSeqId, endSeqId); } else { read(entry.getValue(), startSeqId, entry.getValue().range.getEndSeqIdIncluded().getLocalComponent()); } } } @Override public void scanMessages(RangeScanRequest request) { queuer.pushAndMaybeRun(request.topic, new RangeScanOp(request)); } protected void scanMessages(RangeScanRequest request, long scanSeqId, int numMsgsRead, long totalSizeRead) { queuer.pushAndMaybeRun(request.topic, new RangeScanOp(request, scanSeqId, numMsgsRead, totalSizeRead)); } public void deliveredUntil(ByteString topic, Long seqId) { // Nothing to do here. this is just a hint that we cannot use. } class UpdateLedgerOp extends TopicOpQueuer.AsynchronousOp { private Set ledgersDeleted; public UpdateLedgerOp(ByteString topic, final Callback cb, final Object ctx, Set ledgersDeleted) { queuer.super(topic, cb, ctx); this.ledgersDeleted = ledgersDeleted; } @Override public void run() { final TopicInfo topicInfo = topicInfos.get(topic); if (topicInfo == null) { logger.error("Server is not responsible for topic!"); cb.operationFailed(ctx, new PubSubException.ServerNotResponsibleForTopicException("")); return; } LedgerRanges.Builder builder = LedgerRanges.newBuilder(); final Set keysToRemove = new HashSet(); boolean foundUnconsumedLedger = false; for (Map.Entry e : topicInfo.ledgerRanges.entrySet()) { LedgerRange lr = e.getValue().range; long ledgerId = lr.getLedgerId(); if (!foundUnconsumedLedger && ledgersDeleted.contains(ledgerId)) { keysToRemove.add(e.getKey()); if (!lr.hasEndSeqIdIncluded()) { String msg = "Should not remove unclosed ledger " + ledgerId + " for topic " + topic.toStringUtf8(); logger.error(msg); cb.operationFailed(ctx, new PubSubException.UnexpectedConditionException(msg)); return; } } else { foundUnconsumedLedger = true; builder.addRanges(lr); } } builder.addRanges(topicInfo.currentLedgerRange.range); if (!keysToRemove.isEmpty()) { final LedgerRanges newRanges = builder.build(); tpManager.writeTopicPersistenceInfo( topic, newRanges, topicInfo.ledgerRangesVersion, new Callback() { public void operationFinished(Object ctx, Version newVersion) { // Finally, all done for (Long k : keysToRemove) { topicInfo.ledgerRanges.remove(k); } topicInfo.ledgerRangesVersion = newVersion; cb.operationFinished(ctx, null); } public void operationFailed(Object ctx, PubSubException exception) { cb.operationFailed(ctx, exception); } }, ctx); } else { cb.operationFinished(ctx, null); } } } class ConsumeUntilOp extends TopicOpQueuer.SynchronousOp { private final long seqId; public ConsumeUntilOp(ByteString topic, long seqId) { queuer.super(topic); this.seqId = seqId; } @Override public void runInternal() { TopicInfo topicInfo = topicInfos.get(topic); if (topicInfo == null) { logger.error("Server is not responsible for topic!"); return; } final LinkedList ledgersToDelete = new LinkedList(); for (Long endSeqIdIncluded : topicInfo.ledgerRanges.keySet()) { if (endSeqIdIncluded <= seqId) { // This ledger's message entries have all been consumed already // so it is safe to delete it from BookKeeper. 
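                    // As a worked illustration (hypothetical values): if ledgerRanges maps
                    // {10 -> ledgerA, 20 -> ledgerB, 35 -> ledgerC} and seqId is 22, this loop
                    // collects ledgerA and ledgerB for deletion and breaks at ledgerC, since the
                    // TreeMap iterates its endSeqIdIncluded keys in ascending order.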
                    long ledgerId = topicInfo.ledgerRanges.get(endSeqIdIncluded).range.getLedgerId();
                    ledgersToDelete.add(ledgerId);
                } else {
                    break;
                }
            }
            // no ledgers need to be deleted
            if (ledgersToDelete.isEmpty()) {
                return;
            }
            Set<Long> ledgersDeleted = new HashSet<Long>();
            deleteLedgersAndUpdateLedgersRange(topic, ledgersToDelete, ledgersDeleted);
        }
    }

    private void deleteLedgersAndUpdateLedgersRange(final ByteString topic,
                                                    final LinkedList<Long> ledgersToDelete,
                                                    final Set<Long> ledgersDeleted) {
        if (ledgersToDelete.isEmpty()) {
            Callback<Void> cb = new Callback<Void>() {
                public void operationFinished(Object ctx, Void result) {
                    // do nothing, op is async to stop other ops
                    // occurring on the topic during the update
                }

                public void operationFailed(Object ctx, PubSubException exception) {
                    logger.error("Failed to update ledger znode for topic {} deleting ledgers {} : {}",
                                 va(topic.toStringUtf8(), ledgersDeleted, exception.getMessage()));
                }
            };
            queuer.pushAndMaybeRun(topic, new UpdateLedgerOp(topic, cb, null, ledgersDeleted));
            return;
        }
        final Long ledger = ledgersToDelete.poll();
        if (null == ledger) {
            deleteLedgersAndUpdateLedgersRange(topic, ledgersToDelete, ledgersDeleted);
            return;
        }
        bk.asyncDeleteLedger(ledger, new DeleteCallback() {
            @Override
            public void deleteComplete(int rc, Object ctx) {
                if (BKException.Code.NoSuchLedgerExistsException == rc || BKException.Code.OK == rc) {
                    ledgersDeleted.add(ledger);
                    deleteLedgersAndUpdateLedgersRange(topic, ledgersToDelete, ledgersDeleted);
                    return;
                } else {
                    logger.warn("Exception while deleting consumed ledger {}, stop deleting other ledgers {} "
                                + "and update ledger ranges with deleted ledgers {} : {}",
                                va(ledger, ledgersToDelete, ledgersDeleted, BKException.create(rc)));
                    // We should not continue when we failed to delete a ledger
                    Callback<Void> cb = new Callback<Void>() {
                        public void operationFinished(Object ctx, Void result) {
                            // do nothing, op is async to stop other ops
                            // occurring on the topic during the update
                        }

                        public void operationFailed(Object ctx, PubSubException exception) {
                            logger.error("Failed to update ledger znode for topic {} deleting ledgers {} : {}",
                                         va(topic, ledgersDeleted, exception.getMessage()));
                        }
                    };
                    queuer.pushAndMaybeRun(topic, new UpdateLedgerOp(topic, cb, null, ledgersDeleted));
                    return;
                }
            }
        }, null);
    }

    public void consumedUntil(ByteString topic, Long seqId) {
        queuer.pushAndMaybeRun(topic, new ConsumeUntilOp(topic, Math.max(seqId, getMinSeqIdForTopic(topic))));
    }

    public void consumeToBound(ByteString topic) {
        TopicInfo topicInfo = topicInfos.get(topic);

        if (topicInfo == null || topicInfo.messageBound == TopicInfo.UNLIMITED) {
            return;
        }
        queuer.pushAndMaybeRun(topic, new ConsumeUntilOp(topic, getMinSeqIdForTopic(topic)));
    }

    public long getMinSeqIdForTopic(ByteString topic) {
        TopicInfo topicInfo = topicInfos.get(topic);

        if (topicInfo == null || topicInfo.messageBound == TopicInfo.UNLIMITED) {
            return Long.MIN_VALUE;
        } else {
            return (topicInfo.lastSeqIdPushed.getLocalComponent() - topicInfo.messageBound) + 1;
        }
    }

    public MessageSeqId getCurrentSeqIdForTopic(ByteString topic) throws ServerNotResponsibleForTopicException {
        TopicInfo topicInfo = topicInfos.get(topic);
        if (topicInfo == null) {
            throw new PubSubException.ServerNotResponsibleForTopicException("");
        }
        return topicInfo.lastSeqIdPushed;
    }

    public long getSeqIdAfterSkipping(ByteString topic, long seqId, int skipAmount) {
        return Math.max(seqId + skipAmount, getMinSeqIdForTopic(topic));
    }

    /**
     * Release topic on failure
     *
     * @param topic
     *            Topic Name
     * @param e
     *            Failure Exception
     * @param ctx
     *            Callback context
     */
    protected void releaseTopicIfRequested(final
            ByteString topic, Exception e, Object ctx) {
        TopicInfo topicInfo = topicInfos.get(topic);
        if (topicInfo == null) {
            logger.warn("No topic found when trying to release ownership of topic "
                        + topic.toStringUtf8() + " on failure.");
            return;
        }
        // do release ownership of topic
        if (topicInfo.doRelease.compareAndSet(false, true)) {
            logger.info("Release topic " + topic.toStringUtf8()
                        + " when bookkeeper persistence manager encounters failure :", e);
            tm.releaseTopic(topic, new Callback<Void>() {
                @Override
                public void operationFailed(Object ctx, PubSubException exception) {
                    logger.error("Exception found on releasing topic " + topic.toStringUtf8()
                                 + " when encountering exception from bookkeeper:", exception);
                }

                @Override
                public void operationFinished(Object ctx, Void resultOfOperation) {
                    logger.info("Successfully released topic {} when encountering"
                                + " exception from bookkeeper", topic.toStringUtf8());
                }
            }, null);
        }
        // if release happens when the topic is changing ledger
        // we need to fail all queued persist requests
        if (topicInfo.doChangeLedger.get()) {
            for (PersistRequest pr : topicInfo.deferredRequests) {
                pr.getCallback().operationFailed(ctx, new PubSubException.ServiceDownException(e));
            }
            topicInfo.deferredRequests.clear();
            topicInfo.lastSeqIdBeforeLedgerChange = -1;
        }
    }

    public class PersistOp extends TopicOpQueuer.SynchronousOp {
        PersistRequest request;

        public PersistOp(PersistRequest request) {
            queuer.super(request.topic);
            this.request = request;
        }

        @Override
        public void runInternal() {
            doPersistMessage(request);
        }
    }

    /**
     * Persist a message by executing a persist request.
     */
    protected void doPersistMessage(final PersistRequest request) {
        final ByteString topic = request.topic;
        final TopicInfo topicInfo = topicInfos.get(topic);

        if (topicInfo == null) {
            request.getCallback().operationFailed(request.ctx,
                    new PubSubException.ServerNotResponsibleForTopicException(""));
            return;
        }
        if (topicInfo.doRelease.get()) {
            request.getCallback().operationFailed(request.ctx, new PubSubException.ServiceDownException(
                    "The ownership of the topic is being released due to an unrecoverable issue."));
            return;
        }
        // if the topic is changing ledger, queue following persist requests until the ledger is changed
        if (topicInfo.doChangeLedger.get()) {
            logger.info("Topic {} is changing ledger, so queue persist request for message.",
                        topic.toStringUtf8());
            topicInfo.deferredRequests.add(request);
            return;
        }

        final long localSeqId = topicInfo.lastSeqIdPushed.getLocalComponent() + 1;
        MessageSeqId.Builder builder = MessageSeqId.newBuilder();
        if (request.message.hasMsgId()) {
            MessageIdUtils.takeRegionMaximum(builder, topicInfo.lastSeqIdPushed, request.message.getMsgId());
        } else {
            builder.addAllRemoteComponents(topicInfo.lastSeqIdPushed.getRemoteComponentsList());
        }
        builder.setLocalComponent(localSeqId);

        // check whether we reach the threshold of a ledger; if we do,
        // open a new ledger to write
        long entriesInThisLedger = localSeqId - topicInfo.currentLedgerRange.getStartSeqIdIncluded() + 1;
        if (UNLIMITED_ENTRIES != maxEntriesPerLedger && entriesInThisLedger >= maxEntriesPerLedger) {
            if (topicInfo.doChangeLedger.compareAndSet(false, true)) {
                // for order guarantees, we should wait until all the adding operations for the current ledger
                // have succeeded,
so we just mark it as lastSeqIdBeforeLedgerChange // when the lastSeqIdBeforeLedgerChange acked, we do changing the ledger if (null == topicInfo.deferredRequests) { topicInfo.deferredRequests = new LinkedList(); } topicInfo.lastSeqIdBeforeLedgerChange = localSeqId; } } topicInfo.lastSeqIdPushed = builder.build(); Message msgToSerialize = Message.newBuilder(request.message).setMsgId(topicInfo.lastSeqIdPushed).build(); final MessageSeqId responseSeqId = msgToSerialize.getMsgId(); topicInfo.currentLedgerRange.handle.asyncAddEntry(msgToSerialize.toByteArray(), new SafeAsynBKCallback.AddCallback() { AtomicBoolean processed = new AtomicBoolean(false); @Override public void safeAddComplete(int rc, LedgerHandle lh, long entryId, Object ctx) { // avoid double callback by mistake, since we may do change ledger in this callback. if (!processed.compareAndSet(false, true)) { return; } if (rc != BKException.Code.OK) { BKException bke = BKException.create(rc); logger.error("Error while persisting entry to ledger: " + lh.getId() + " for topic: " + topic.toStringUtf8(), bke); request.getCallback().operationFailed(ctx, new PubSubException.ServiceDownException(bke)); // To preserve ordering guarantees, we // should give up the topic and not let // other operations through releaseTopicIfRequested(request.topic, bke, ctx); return; } if (entryId + topicInfo.currentLedgerRange.getStartSeqIdIncluded() != localSeqId) { String msg = "Expected BK to assign entry-id: " + (localSeqId - topicInfo.currentLedgerRange.getStartSeqIdIncluded()) + " but it instead assigned entry-id: " + entryId + " topic: " + topic.toStringUtf8() + "ledger: " + lh.getId(); logger.error(msg); throw new UnexpectedError(msg); } topicInfo.lastEntryIdAckedInCurrentLedger = entryId; request.getCallback().operationFinished(ctx, responseSeqId); // if this acked entry is the last entry of current ledger // we can add a ChangeLedgerOp to execute to change ledger if (topicInfo.doChangeLedger.get() && entryId + topicInfo.currentLedgerRange.getStartSeqIdIncluded() == topicInfo.lastSeqIdBeforeLedgerChange) { // change ledger changeLedger(topic, new Callback() { @Override public void operationFailed(Object ctx, PubSubException exception) { logger.error("Failed to change ledger for topic " + topic.toStringUtf8(), exception); // change ledger failed, we should give up topic releaseTopicIfRequested(request.topic, exception, ctx); } @Override public void operationFinished(Object ctx, Void resultOfOperation) { topicInfo.doChangeLedger.set(false); topicInfo.lastSeqIdBeforeLedgerChange = -1; // the ledger is changed, persist queued requests // if the number of queued persist requests is more than maxEntriesPerLedger // we just persist maxEntriesPerLedger requests, other requests are still queued // until next ledger changed. 
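                                        // As a worked illustration (hypothetical numbers): with
                                        // maxEntriesPerLedger = 100 and 150 requests deferred while the
                                        // ledger was changing, the loop below replays the first 100; the
                                        // remaining 50 stay queued until the next ledger change completes.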
int numRequests = 0; while (!topicInfo.deferredRequests.isEmpty() && numRequests < maxEntriesPerLedger) { PersistRequest pr = topicInfo.deferredRequests.removeFirst(); doPersistMessage(pr); ++numRequests; } logger.debug("Finished persisting {} queued requests, but there are still {} requests in queue.", numRequests, topicInfo.deferredRequests.size()); } }, ctx); } } }, request.ctx); } public void persistMessage(PersistRequest request) { queuer.pushAndMaybeRun(request.topic, new PersistOp(request)); } public void scanSingleMessage(ScanRequest request) { throw new RuntimeException("Not implemented"); } static SafeAsynBKCallback.CloseCallback noOpCloseCallback = new SafeAsynBKCallback.CloseCallback() { @Override public void safeCloseComplete(int rc, LedgerHandle ledgerHandle, Object ctx) { }; }; class AcquireOp extends TopicOpQueuer.AsynchronousOp { public AcquireOp(ByteString topic, Callback cb, Object ctx) { queuer.super(topic, cb, ctx); } @Override public void run() { if (topicInfos.containsKey(topic)) { // Already acquired, do nothing cb.operationFinished(ctx, null); return; } // read persistence info tpManager.readTopicPersistenceInfo(topic, new Callback>() { @Override public void operationFinished(Object ctx, Versioned ranges) { if (null != ranges) { processTopicLedgerRanges(ranges.getValue(), ranges.getVersion()); } else { processTopicLedgerRanges(LedgerRanges.getDefaultInstance(), Version.NEW); } } @Override public void operationFailed(Object ctx, PubSubException exception) { cb.operationFailed(ctx, exception); } }, ctx); } void processTopicLedgerRanges(final LedgerRanges ranges, final Version version) { final List rangesList = ranges.getRangesList(); if (!rangesList.isEmpty()) { LedgerRange range = rangesList.get(0); if (range.hasStartSeqIdIncluded()) { // we already have start seq id processTopicLedgerRanges(rangesList, version, range.getStartSeqIdIncluded()); return; } getStartSeqIdToProcessTopicLedgerRanges(rangesList, version); return; } // process topic ledger ranges directly processTopicLedgerRanges(rangesList, version, START_SEQ_ID); } /** * Process old version ledger ranges to fetch start seq id. */ void getStartSeqIdToProcessTopicLedgerRanges( final List rangesList, final Version version) { final LedgerRange range = rangesList.get(0); if (!range.hasEndSeqIdIncluded()) { // process topic ledger ranges directly processTopicLedgerRanges(rangesList, version, START_SEQ_ID); return; } final long ledgerId = range.getLedgerId(); // open the first ledger to compute right start seq id bk.asyncOpenLedger(ledgerId, DigestType.CRC32, passwd, new SafeAsynBKCallback.OpenCallback() { @Override public void safeOpenComplete(int rc, LedgerHandle ledgerHandle, Object ctx) { if (rc == BKException.Code.NoSuchLedgerExistsException) { // process next ledger processTopicLedgerRanges(rangesList, version, START_SEQ_ID); return; } else if (rc != BKException.Code.OK) { BKException bke = BKException.create(rc); logger.error("Could not open ledger {} to get start seq id while acquiring topic {} : {}", va(ledgerId, topic.toStringUtf8(), bke)); cb.operationFailed(ctx, new PubSubException.ServiceDownException(bke)); return; } final long numEntriesInLastLedger = ledgerHandle.getLastAddConfirmed() + 1; // the ledger is closed before, calling close is just a nop operation. try { ledgerHandle.close(); } catch (InterruptedException ie) { // the exception would never be thrown for a read only ledger handle. } catch (BKException bke) { // the exception would never be thrown for a read only ledger handle. 
} if (numEntriesInLastLedger <= 0) { String msg = "No entries found in a have-end-seq-id ledger " + ledgerId + " when acquiring topic " + topic.toStringUtf8() + "."; logger.error(msg); cb.operationFailed(ctx, new PubSubException.UnexpectedConditionException(msg)); return; } long endOfLedger = range.getEndSeqIdIncluded().getLocalComponent(); long startOfLedger = endOfLedger - numEntriesInLastLedger + 1; processTopicLedgerRanges(rangesList, version, startOfLedger); } }, ctx); } void processTopicLedgerRanges(final List rangesList, final Version version, long startOfLedger) { logger.info("Process {} ledgers for topic {} starting from seq id {}.", va(rangesList.size(), topic.toStringUtf8(), startOfLedger)); Iterator lrIterator = rangesList.iterator(); TopicInfo topicInfo = new TopicInfo(); while (lrIterator.hasNext()) { LedgerRange range = lrIterator.next(); if (range.hasEndSeqIdIncluded()) { // this means it was a valid and completely closed ledger long endOfLedger = range.getEndSeqIdIncluded().getLocalComponent(); if (range.hasStartSeqIdIncluded()) { startOfLedger = range.getStartSeqIdIncluded(); } else { range = buildLedgerRange(range.getLedgerId(), startOfLedger, range.getEndSeqIdIncluded()); } topicInfo.ledgerRanges.put(endOfLedger, new InMemoryLedgerRange(range)); if (startOfLedger < endOfLedger + 1) { startOfLedger = endOfLedger + 1; } continue; } // If it doesn't have a valid end, it must be the last ledger if (lrIterator.hasNext()) { String msg = "Ledger-id: " + range.getLedgerId() + " for topic: " + topic.toStringUtf8() + " is not the last one but still does not have an end seq-id"; logger.error(msg); cb.operationFailed(ctx, new PubSubException.UnexpectedConditionException(msg)); return; } if (range.hasStartSeqIdIncluded()) { startOfLedger = range.getStartSeqIdIncluded(); } // The last ledger does not have a valid seq-id, lets try to // find it out recoverLastTopicLedgerAndOpenNewOne(range.getLedgerId(), startOfLedger, version, topicInfo); return; } // All ledgers were found properly closed, just start a new one openNewTopicLedger(topic, version, topicInfo, startOfLedger, false, cb, ctx); } /** * Recovers the last ledger, opens a new one, and persists the new * information to ZK * * @param ledgerId * Ledger to be recovered * @param expectedStartSeqId * Start seq id of the ledger to recover * @param expectedVersionOfLedgerNode * Expected version to update ledgers range * @param topicInfo * Topic info */ private void recoverLastTopicLedgerAndOpenNewOne(final long ledgerId, final long expectedStartSeqId, final Version expectedVersionOfLedgerNode, final TopicInfo topicInfo) { bk.asyncOpenLedger(ledgerId, DigestType.CRC32, passwd, new SafeAsynBKCallback.OpenCallback() { @Override public void safeOpenComplete(int rc, LedgerHandle ledgerHandle, Object ctx) { if (rc != BKException.Code.OK) { BKException bke = BKException.create(rc); logger.error("While acquiring topic: " + topic.toStringUtf8() + ", could not open unrecovered ledger: " + ledgerId, bke); cb.operationFailed(ctx, new PubSubException.ServiceDownException(bke)); return; } final long numEntriesInLastLedger = ledgerHandle.getLastAddConfirmed() + 1; if (numEntriesInLastLedger <= 0) { // this was an empty ledger that someone created but // couldn't write to, so just ignore it logger.info("Pruning empty ledger: " + ledgerId + " for topic: " + topic.toStringUtf8()); closeLedger(ledgerHandle); openNewTopicLedger(topic, expectedVersionOfLedgerNode, topicInfo, expectedStartSeqId, false, cb, ctx); return; } // we have to read the last entry 
of the ledger to find // out the last seq-id ledgerHandle.asyncReadEntries(numEntriesInLastLedger - 1, numEntriesInLastLedger - 1, new SafeAsynBKCallback.ReadCallback() { @Override public void safeReadComplete(int rc, LedgerHandle lh, Enumeration seq, Object ctx) { if (rc != BKException.Code.OK || !seq.hasMoreElements()) { if (rc == BKException.Code.OK) { // means that there is no entries read, provide a meaningful exception rc = BKException.Code.NoSuchEntryException; } logger.info("Received error code {}", rc); BKException bke = BKException.create(rc); logger.error("While recovering ledger: " + ledgerId + " for topic: " + topic.toStringUtf8() + ", could not read last entry", bke); cb.operationFailed(ctx, new PubSubException.ServiceDownException(bke)); return; } Message lastMessage; try { lastMessage = Message.parseFrom(seq.nextElement().getEntry()); } catch (InvalidProtocolBufferException e) { String msg = "While recovering ledger: " + ledgerId + " for topic: " + topic.toStringUtf8() + ", could not deserialize last message"; logger.error(msg, e); cb.operationFailed(ctx, new PubSubException.UnexpectedConditionException(msg)); return; } long endOfLedger = lastMessage.getMsgId().getLocalComponent(); long startOfLedger = endOfLedger - numEntriesInLastLedger + 1; if (startOfLedger != expectedStartSeqId) { // gap would be introduced by old version when gc consumed ledgers String msg = "Expected start seq id of recovered ledger " + ledgerId + " to be " + expectedStartSeqId + " but it was " + startOfLedger + "."; logger.warn(msg); } LedgerRange lr = buildLedgerRange(ledgerId, startOfLedger, lastMessage.getMsgId()); topicInfo.ledgerRanges.put(endOfLedger, new InMemoryLedgerRange(lr, lh)); logger.info("Recovered unclosed ledger: {} for topic: {} with {} entries starting from seq id {}", va(ledgerId, topic.toStringUtf8(), numEntriesInLastLedger, startOfLedger)); openNewTopicLedger(topic, expectedVersionOfLedgerNode, topicInfo, endOfLedger + 1, false, cb, ctx); } }, ctx); } }, ctx); } } /** * Open New Ledger to write for a topic. * * @param topic * Topic Name * @param expectedVersionOfLedgersNode * Expected Version to Update Ledgers Node. * @param topicInfo * Topic Information * @param startSeqId * Start of sequence id for new ledger * @param changeLedger * Whether is it called when changing ledger * @param cb * Callback to trigger after opening new ledger. * @param ctx * Callback context. */ void openNewTopicLedger(final ByteString topic, final Version expectedVersionOfLedgersNode, final TopicInfo topicInfo, final long startSeqId, final boolean changeLedger, final Callback cb, final Object ctx) { bk.asyncCreateLedger(cfg.getBkEnsembleSize(), cfg.getBkWriteQuorumSize(), cfg.getBkAckQuorumSize(), DigestType.CRC32, passwd, new SafeAsynBKCallback.CreateCallback() { AtomicBoolean processed = new AtomicBoolean(false); @Override public void safeCreateComplete(int rc, LedgerHandle lh, Object ctx) { if (!processed.compareAndSet(false, true)) { return; } if (rc != BKException.Code.OK) { BKException bke = BKException.create(rc); logger.error("Could not create new ledger while acquiring topic: " + topic.toStringUtf8(), bke); cb.operationFailed(ctx, new PubSubException.ServiceDownException(bke)); return; } // compute last seq id if (!changeLedger) { topicInfo.lastSeqIdPushed = topicInfo.ledgerRanges.isEmpty() ? 
                                MessageSeqId.newBuilder().setLocalComponent(startSeqId - 1).build()
                                : topicInfo.ledgerRanges.lastEntry().getValue().range.getEndSeqIdIncluded();
                        }
                        LedgerRange lastRange = LedgerRange.newBuilder().setLedgerId(lh.getId())
                                                .setStartSeqIdIncluded(startSeqId).build();
                        topicInfo.currentLedgerRange = new InMemoryLedgerRange(lastRange, lh);
                        topicInfo.lastEntryIdAckedInCurrentLedger = -1;

                        // Persist the fact that we started this new ledger to ZK
                        LedgerRanges.Builder builder = LedgerRanges.newBuilder();
                        for (InMemoryLedgerRange imlr : topicInfo.ledgerRanges.values()) {
                            builder.addRanges(imlr.range);
                        }
                        builder.addRanges(lastRange);

                        tpManager.writeTopicPersistenceInfo(
                            topic, builder.build(), expectedVersionOfLedgersNode,
                            new Callback<Version>() {
                                @Override
                                public void operationFinished(Object ctx, Version newVersion) {
                                    // Finally, all done
                                    topicInfo.ledgerRangesVersion = newVersion;
                                    topicInfos.put(topic, topicInfo);
                                    cb.operationFinished(ctx, null);
                                }

                                @Override
                                public void operationFailed(Object ctx, PubSubException exception) {
                                    cb.operationFailed(ctx, exception);
                                }
                            }, ctx);
                        return;
                    }
                }, ctx);
    }

    /**
     * acquire ownership of a topic, doing whatever is needed to be able to
     * perform reads and writes on that topic from here on
     *
     * @param topic
     * @param callback
     * @param ctx
     */
    @Override
    public void acquiredTopic(ByteString topic, Callback<Void> callback, Object ctx) {
        queuer.pushAndMaybeRun(topic, new AcquireOp(topic, callback, ctx));
    }

    /**
     * Change ledger to write for a topic.
     */
    class ChangeLedgerOp extends TopicOpQueuer.AsynchronousOp<Void> {

        public ChangeLedgerOp(ByteString topic, Callback<Void> cb, Object ctx) {
            queuer.super(topic, cb, ctx);
        }

        @Override
        public void run() {
            TopicInfo topicInfo = topicInfos.get(topic);
            if (null == topicInfo) {
                logger.error("Weird! hub server doesn't own topic " + topic.toStringUtf8()
                             + " when changing ledger to write.");
                cb.operationFailed(ctx, new PubSubException.ServerNotResponsibleForTopicException(""));
                return;
            }
            closeLastTopicLedgerAndOpenNewOne(topicInfo);
        }

        private void closeLastTopicLedgerAndOpenNewOne(final TopicInfo topicInfo) {
            final long ledgerId = topicInfo.currentLedgerRange.handle.getId();
            topicInfo.currentLedgerRange.handle.asyncClose(new CloseCallback() {
                AtomicBoolean processed = new AtomicBoolean(false);

                @Override
                public void closeComplete(int rc, LedgerHandle lh, Object ctx) {
                    if (!processed.compareAndSet(false, true)) {
                        return;
                    }
                    if (BKException.Code.OK != rc) {
                        BKException bke = BKException.create(rc);
                        logger.error("Could not close ledger " + ledgerId
                                     + " while changing ledger of topic " + topic.toStringUtf8(), bke);
                        cb.operationFailed(ctx, new PubSubException.ServiceDownException(bke));
                        return;
                    }
                    long endSeqId = topicInfo.lastSeqIdPushed.getLocalComponent();
                    // update last range
                    LedgerRange lastRange = buildLedgerRange(ledgerId,
                            topicInfo.currentLedgerRange.getStartSeqIdIncluded(), topicInfo.lastSeqIdPushed);
                    topicInfo.currentLedgerRange.range = lastRange;
                    // put current ledger to ledger ranges
                    topicInfo.ledgerRanges.put(endSeqId, topicInfo.currentLedgerRange);
                    logger.info("Closed written ledger " + ledgerId + " for topic "
                                + topic.toStringUtf8() + " to change ledger.");
                    openNewTopicLedger(topic, topicInfo.ledgerRangesVersion, topicInfo, endSeqId + 1, true, cb, ctx);
                }
            }, ctx);
        }
    }

    /**
     * Change ledger to write for a topic.
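     * <p>
     * A hypothetical caller forcing an early roll might look like:
     * <pre>{@code
     * changeLedger(topic, new Callback<Void>() {
     *     public void operationFinished(Object ctx, Void result) { } // rolled
     *     public void operationFailed(Object ctx, PubSubException e) { } // give up the topic
     * }, null);
     * }</pre>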
     *
     * @param topic
     *            Topic Name
     */
    protected void changeLedger(ByteString topic, Callback<Void> cb, Object ctx) {
        queuer.pushAndMaybeRun(topic, new ChangeLedgerOp(topic, cb, ctx));
    }

    public void closeLedger(LedgerHandle lh) {
        // Intentionally a no-op; closing here is best-effort only. An
        // asynchronous close would be issued as:
        // lh.asyncClose(noOpCloseCallback, null);
    }

    class ReleaseOp extends TopicOpQueuer.SynchronousOp {

        public ReleaseOp(ByteString topic) {
            queuer.super(topic);
        }

        @Override
        public void runInternal() {
            TopicInfo topicInfo = topicInfos.remove(topic);
            if (topicInfo == null) {
                return;
            }

            for (InMemoryLedgerRange imlr : topicInfo.ledgerRanges.values()) {
                if (imlr.handle != null) {
                    closeLedger(imlr.handle);
                }
            }

            if (topicInfo.currentLedgerRange != null && topicInfo.currentLedgerRange.handle != null) {
                closeLedger(topicInfo.currentLedgerRange.handle);
            }
        }
    }

    /**
     * Release any resources for the topic that might be currently held. There
     * won't be any subsequent reads or writes on that topic coming
     *
     * @param topic
     */
    @Override
    public void lostTopic(ByteString topic) {
        queuer.pushAndMaybeRun(topic, new ReleaseOp(topic));
    }

    class SetMessageBoundOp extends TopicOpQueuer.SynchronousOp {
        final int bound;

        public SetMessageBoundOp(ByteString topic, int bound) {
            queuer.super(topic);
            this.bound = bound;
        }

        @Override
        public void runInternal() {
            TopicInfo topicInfo = topicInfos.get(topic);
            if (topicInfo != null) {
                topicInfo.messageBound = bound;
            }
        }
    }

    public void setMessageBound(ByteString topic, Integer bound) {
        queuer.pushAndMaybeRun(topic, new SetMessageBoundOp(topic, bound));
    }

    public void clearMessageBound(ByteString topic) {
        queuer.pushAndMaybeRun(topic, new SetMessageBoundOp(topic, TopicInfo.UNLIMITED));
    }

    @Override
    public void stop() {
        try {
            tpManager.close();
        } catch (IOException ioe) {
            logger.warn("Exception closing topic persistence manager : ", ioe);
        }
    }
}
CacheKey.java000066400000000000000000000042131244507361200340510ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/persistence/
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.hedwig.server.persistence;

import com.google.protobuf.ByteString;

import org.apache.hedwig.server.common.ByteStringInterner;

public class CacheKey {

    ByteString topic;
    long seqId;

    public CacheKey(ByteString topic, long seqId) {
        this.topic = ByteStringInterner.intern(topic);
        this.seqId = seqId;
    }

    public ByteString getTopic() {
        return topic;
    }

    public long getSeqId() {
        return seqId;
    }

    @Override
    public int hashCode() {
        final int prime = 31;
        int result = 1;
        result = prime * result + (int) (seqId ^ (seqId >>> 32));
        result = prime * result + ((topic == null) ?
                0 : topic.hashCode());
        return result;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj)
            return true;
        if (obj == null)
            return false;
        if (getClass() != obj.getClass())
            return false;
        CacheKey other = (CacheKey) obj;
        if (seqId != other.seqId)
            return false;
        if (topic == null) {
            if (other.topic != null)
                return false;
        } else if (!topic.equals(other.topic))
            return false;
        return true;
    }

    @Override
    public String toString() {
        return "(" + topic.toStringUtf8() + "," + seqId + ")";
    }
}
CacheValue.java000066400000000000000000000063641244507361200344060ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/persistence/
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.hedwig.server.persistence;

import java.util.HashSet;
import java.util.Set;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import org.apache.hedwig.protocol.PubSubProtocol.Message;
import org.apache.hedwig.server.common.UnexpectedError;

/**
 * This class is NOT thread safe. It need not be thread-safe because our
 * read-ahead cache will operate with only 1 thread
 */
public class CacheValue {

    static Logger logger = LoggerFactory.getLogger(ReadAheadCache.class);

    // We don't care about the order of callbacks here:
    // when a message is scanned, it should be delivered to all registered callbacks
    Set<ScanCallbackWithContext> callbacks = new HashSet<ScanCallbackWithContext>();
    Message message;
    long timeOfAddition = 0;

    public CacheValue() {
    }

    public boolean isStub() {
        return message == null;
    }

    public long getTimeOfAddition() {
        if (message == null) {
            throw new UnexpectedError("Time of add requested from a stub");
        }
        return timeOfAddition;
    }

    public void setMessageAndInvokeCallbacks(Message message, long currTime) {
        if (this.message != null) {
            // Duplicate read for the same message coming back
            return;
        }

        this.message = message;
        this.timeOfAddition = currTime;

        logger.debug("Invoking {} callbacks for {} message added to cache",
                     callbacks.size(), message);
        for (ScanCallbackWithContext callbackWithCtx : callbacks) {
            if (null != callbackWithCtx) {
                callbackWithCtx.getScanCallback().messageScanned(callbackWithCtx.getCtx(), message);
            }
        }
    }

    public boolean removeCallback(ScanCallback callback, Object ctx) {
        return callbacks.remove(new ScanCallbackWithContext(callback, ctx));
    }

    public void addCallback(ScanCallback callback, Object ctx) {
        if (!isStub()) {
            // call the callback right away
            callback.messageScanned(ctx, message);
            return;
        }

        callbacks.add(new ScanCallbackWithContext(callback, ctx));
    }

    public Message getMessage() {
        return message;
    }

    public void setErrorAndInvokeCallbacks(Exception exception) {
        for (ScanCallbackWithContext callbackWithCtx : callbacks) {
            if (null != callbackWithCtx) {
                callbackWithCtx.getScanCallback().scanFailed(callbackWithCtx.getCtx(), exception);
            }
        }
    }
}
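A minimal standalone sketch of the value-equality contract above (hypothetical: the demo class name, topic string, and cached values below are invented and are not part of this release):

package org.apache.hedwig.server.persistence;

import java.util.HashMap;
import java.util.Map;

import com.google.protobuf.ByteString;

public class HypotheticalCacheKeyDemo {
    public static void main(String[] args) {
        ByteString topic = ByteString.copyFromUtf8("demo-topic");
        Map<CacheKey, String> cache = new HashMap<CacheKey, String>();

        // Two independently constructed keys with equal topic and seq-id are
        // equal and hash alike, so the second lookup hits the first entry.
        cache.put(new CacheKey(topic, 7L), "message-7");
        System.out.println(cache.get(new CacheKey(topic, 7L))); // prints message-7

        // A different seq-id is a different key, so this lookup misses.
        System.out.println(cache.get(new CacheKey(topic, 8L))); // prints null
    }
}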
CancelScanRequest.java000066400000000000000000000017421244507361200357440ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/persistence/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.persistence; public interface CancelScanRequest { /** * @return the scan request to cancel */ public ScanRequest getScanRequest(); } Factory.java000066400000000000000000000016221244507361200340050ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/persistence/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.persistence; public interface Factory { public T newInstance(); } LocalDBPersistenceManager.java000066400000000000000000000430661244507361200373460ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/persistence/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package org.apache.hedwig.server.persistence; import java.io.File; import java.io.IOException; import java.sql.Connection; import java.sql.DriverManager; import java.sql.PreparedStatement; import java.sql.ResultSet; import java.sql.SQLException; import java.sql.Statement; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ConcurrentMap; import java.math.BigInteger; import java.security.MessageDigest; import java.security.NoSuchAlgorithmException; import javax.sql.rowset.serial.SerialBlob; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import com.google.protobuf.ByteString; import org.apache.hedwig.exceptions.PubSubException.ServiceDownException; import org.apache.hedwig.exceptions.PubSubException.UnexpectedConditionException; import org.apache.hedwig.protocol.PubSubProtocol.Message; import org.apache.hedwig.protocol.PubSubProtocol.MessageSeqId; import org.apache.hedwig.protoextensions.MessageIdUtils; import org.apache.hedwig.server.persistence.ScanCallback.ReasonForFinish; import org.apache.hedwig.util.Callback; import org.apache.hedwig.util.FileUtils; public class LocalDBPersistenceManager implements PersistenceManagerWithRangeScan { static Logger logger = LoggerFactory.getLogger(LocalDBPersistenceManager.class); static String connectionURL; static { try { File tempDir = FileUtils.createTempDirectory("derby", null); // Since derby needs to create it, I will have to delete it first if (!tempDir.delete()) { throw new IOException("Could not delete dir: " + tempDir.getAbsolutePath()); } connectionURL = "jdbc:derby:" + tempDir.getAbsolutePath() + ";create=true"; } catch (IOException e) { throw new RuntimeException(e); } } private static final ThreadLocal threadLocalConnection = new ThreadLocal() { @Override protected Connection initialValue() { try { return DriverManager.getConnection(connectionURL); } catch (SQLException e) { logger.error("Could not connect to derby", e); return null; } } }; private static final ThreadLocal threadLocalDigest = new ThreadLocal() { @Override protected MessageDigest initialValue() { try { return MessageDigest.getInstance("MD5"); } catch (NoSuchAlgorithmException e) { logger.error("Could not find MD5 hash", e); return null; } } }; static final String ID_FIELD_NAME = "id"; static final String MSG_FIELD_NAME = "msg"; static final String driver = "org.apache.derby.jdbc.EmbeddedDriver"; static final int SCAN_CHUNK = 1000; /** * Having trouble restarting the database multiple times from within the * same jvm. Hence to facilitate units tests, we are just going to have a * version number that we will append to every table name. 
This version * number will be incremented in lieu of shutting down the database and * restarting it, so that we get different table names, and it behaves like * a brand new database */ private int version = 0; ConcurrentMap currTopicSeqIds = new ConcurrentHashMap(); static LocalDBPersistenceManager instance = new LocalDBPersistenceManager(); public static LocalDBPersistenceManager instance() { return instance; } private LocalDBPersistenceManager() { try { Class.forName(driver).newInstance(); logger.info("Derby Driver loaded"); } catch (java.lang.ClassNotFoundException e) { logger.error("Derby driver not found", e); } catch (InstantiationException e) { logger.error("Could not instantiate derby driver", e); } catch (IllegalAccessException e) { logger.error("Could not instantiate derby driver", e); } } @Override public void stop() { // do nothing } /** * Ensures that at least the default seq-id exists in the map for the given * topic. Checks for race conditions (.e.g, another thread inserts the * default id before us), and returns the latest seq-id value in the map * * @param topic * @return */ private MessageSeqId ensureSeqIdExistsForTopic(ByteString topic) { MessageSeqId presentSeqIdInMap = currTopicSeqIds.get(topic); if (presentSeqIdInMap != null) { return presentSeqIdInMap; } presentSeqIdInMap = MessageSeqId.newBuilder().setLocalComponent(0).build(); MessageSeqId oldSeqIdInMap = currTopicSeqIds.putIfAbsent(topic, presentSeqIdInMap); if (oldSeqIdInMap != null) { return oldSeqIdInMap; } return presentSeqIdInMap; } /** * Adjust the current seq id of the topic based on the message we are about * to publish. The local component of the current seq-id is always * incremented by 1. For the other components, there are two cases: * * 1. If the message to be published doesn't have a seq-id (locally * published messages), the other components are left as is. * * 2. If the message to be published has a seq-id, we take the max of the * current one we have, and that in the message to be published. * * @param topic * @param messageToPublish * @return The value of the local seq-id obtained after incrementing the * local component. 
This value should be used as an id while * persisting to Derby * @throws UnexpectedConditionException */ private long adjustTopicSeqIdForPublish(ByteString topic, Message messageToPublish) throws UnexpectedConditionException { long retValue = 0; MessageSeqId oldId; MessageSeqId.Builder newIdBuilder = MessageSeqId.newBuilder(); do { oldId = ensureSeqIdExistsForTopic(topic); // Increment our own component by 1 retValue = oldId.getLocalComponent() + 1; newIdBuilder.setLocalComponent(retValue); if (messageToPublish.hasMsgId()) { // take a region-wise max MessageIdUtils.takeRegionMaximum(newIdBuilder, messageToPublish.getMsgId(), oldId); } else { newIdBuilder.addAllRemoteComponents(oldId.getRemoteComponentsList()); } } while (!currTopicSeqIds.replace(topic, oldId, newIdBuilder.build())); return retValue; } public long getSeqIdAfterSkipping(ByteString topic, long seqId, int skipAmount) { return seqId + skipAmount; } public void persistMessage(PersistRequest request) { Connection conn = threadLocalConnection.get(); Callback callback = request.getCallback(); Object ctx = request.getCtx(); ByteString topic = request.getTopic(); Message message = request.getMessage(); if (conn == null) { callback.operationFailed(ctx, new ServiceDownException("Not connected to derby")); return; } long seqId; try { seqId = adjustTopicSeqIdForPublish(topic, message); } catch (UnexpectedConditionException e) { callback.operationFailed(ctx, e); return; } PreparedStatement stmt; boolean triedCreatingTable = false; while (true) { try { message.getBody(); stmt = conn.prepareStatement("INSERT INTO " + getTableNameForTopic(topic) + " VALUES(?,?)"); stmt.setLong(1, seqId); stmt.setBlob(2, new SerialBlob(message.toByteArray())); int rowCount = stmt.executeUpdate(); stmt.close(); if (rowCount != 1) { logger.error("Unexpected number of affected rows from derby"); callback.operationFailed(ctx, new ServiceDownException("Unexpected response from derby")); return; } break; } catch (SQLException sqle) { String theError = (sqle).getSQLState(); if (theError.equals("42X05") && !triedCreatingTable) { createTable(conn, topic); triedCreatingTable = true; continue; } logger.error("Error while executing derby insert", sqle); callback.operationFailed(ctx, new ServiceDownException(sqle)); return; } } callback.operationFinished(ctx, MessageIdUtils.mergeLocalSeqId(message, seqId).getMsgId()); } /* * This method does not throw an exception because another thread might * sneak in and create the table before us */ private void createTable(Connection conn, ByteString topic) { Statement stmt = null; try { stmt = conn.createStatement(); String tableName = getTableNameForTopic(topic); stmt.execute("CREATE TABLE " + tableName + " (" + ID_FIELD_NAME + " BIGINT NOT NULL CONSTRAINT ID_PK_" + tableName + " PRIMARY KEY," + MSG_FIELD_NAME + " BLOB(2M) NOT NULL)"); } catch (SQLException e) { logger.debug("Could not create table", e); } finally { try { if (stmt != null) { stmt.close(); } } catch (SQLException e) { logger.error("Error closing statement", e); } } } public MessageSeqId getCurrentSeqIdForTopic(ByteString topic) { return ensureSeqIdExistsForTopic(topic); } public void scanSingleMessage(ScanRequest request) { scanMessagesInternal(request.getTopic(), request.getStartSeqId(), 1, Long.MAX_VALUE, request.getCallback(), request.getCtx(), 1); return; } public void scanMessages(RangeScanRequest request) { scanMessagesInternal(request.getTopic(), request.getStartSeqId(), request.getMessageLimit(), request .getSizeLimit(), request.getCallback(), 
                             request.getCtx(), SCAN_CHUNK);
        return;
    }

    private String getTableNameForTopic(ByteString topic) {
        String src = (topic.toStringUtf8() + "_" + version);
        threadLocalDigest.get().reset();
        byte[] digest = threadLocalDigest.get().digest(src.getBytes());
        BigInteger bigInt = new BigInteger(1, digest);
        return String.format("TABLE_%032X", bigInt);
    }

    private void scanMessagesInternal(ByteString topic, long startSeqId, int messageLimit, long sizeLimit,
                                      ScanCallback callback, Object ctx, int scanChunk) {
        Connection conn = threadLocalConnection.get();
        if (conn == null) {
            callback.scanFailed(ctx, new ServiceDownException("Not connected to derby"));
            return;
        }

        long currentSeqId;
        currentSeqId = startSeqId;

        PreparedStatement stmt = null;
        try {
            try {
                stmt = conn.prepareStatement("SELECT * FROM " + getTableNameForTopic(topic) + " WHERE "
                                             + ID_FIELD_NAME + " >= ? AND " + ID_FIELD_NAME + " <= ?");
            } catch (SQLException sqle) {
                String theError = (sqle).getSQLState();
                if (theError.equals("42X05")) {
                    // No table, scan is over
                    callback.scanFinished(ctx, ReasonForFinish.NO_MORE_MESSAGES);
                    return;
                } else {
                    throw sqle;
                }
            }

            int numMessages = 0;
            long totalSize = 0;

            while (true) {
                stmt.setLong(1, currentSeqId);
                stmt.setLong(2, currentSeqId + scanChunk);

                if (!stmt.execute()) {
                    String errorMsg = "Select query did not return a result set";
                    logger.error(errorMsg);
                    stmt.close();
                    callback.scanFailed(ctx, new ServiceDownException(errorMsg));
                    return;
                }

                ResultSet resultSet = stmt.getResultSet();

                if (!resultSet.next()) {
                    stmt.close();
                    callback.scanFinished(ctx, ReasonForFinish.NO_MORE_MESSAGES);
                    return;
                }

                do {
                    long localSeqId = resultSet.getLong(1);

                    Message.Builder messageBuilder = Message.newBuilder().mergeFrom(resultSet.getBinaryStream(2));

                    // Merge in the local seq-id since that is not stored with
                    // the message
                    Message message = MessageIdUtils.mergeLocalSeqId(messageBuilder, localSeqId);

                    callback.messageScanned(ctx, message);
                    numMessages++;
                    totalSize += message.getBody().size();

                    if (numMessages > messageLimit) {
                        stmt.close();
                        callback.scanFinished(ctx, ReasonForFinish.NUM_MESSAGES_LIMIT_EXCEEDED);
                        return;
                    } else if (totalSize > sizeLimit) {
                        stmt.close();
                        callback.scanFinished(ctx, ReasonForFinish.SIZE_LIMIT_EXCEEDED);
                        return;
                    }
                } while (resultSet.next());

                // advance by the chunk we actually scanned, not the default
                // SCAN_CHUNK, so a smaller scanChunk does not skip rows
                currentSeqId += scanChunk;
            }
        } catch (SQLException e) {
            logger.error("SQL Exception", e);
            callback.scanFailed(ctx, new ServiceDownException(e));
            return;
        } catch (IOException e) {
            logger.error("Message stored in derby is not parseable", e);
            callback.scanFailed(ctx, new ServiceDownException(e));
            return;
        } finally {
            try {
                if (stmt != null) {
                    stmt.close();
                }
            } catch (SQLException e) {
                logger.error("Error closing statement", e);
            }
        }
    }

    public void deliveredUntil(ByteString topic, Long seqId) {
        // noop
    }

    public void consumedUntil(ByteString topic, Long seqId) {
        Connection conn = threadLocalConnection.get();
        if (conn == null) {
            logger.error("Not connected to derby");
            return;
        }
        PreparedStatement stmt = null;
        try {
            stmt = conn.prepareStatement("DELETE FROM " + getTableNameForTopic(topic) + " WHERE "
                                         + ID_FIELD_NAME + " <= ?");
            stmt.setLong(1, seqId);
            int rowCount = stmt.executeUpdate();
            if (logger.isDebugEnabled()) {
                logger.debug("Deleted " + rowCount + " records for topic: " + topic.toStringUtf8()
                             + ", seqId: " + seqId);
            }
        } catch (SQLException sqle) {
            String theError = (sqle).getSQLState();
            if (theError.equals("42X05")) {
                logger.warn("Table for topic (" + topic + ") does not exist so no consumed messages to delete!");
            } else
                logger.error("Error while executing derby delete for consumed 
messages", sqle); } finally { try { if (stmt != null) { stmt.close(); } } catch (SQLException e) { logger.error("Error closing statement", e); } } } public void setMessageBound(ByteString topic, Integer bound) { // noop; Maybe implement later } public void clearMessageBound(ByteString topic) { // noop; Maybe implement later } public void consumeToBound(ByteString topic) { // noop; Maybe implement later } @Override protected void finalize() throws Throwable { if (driver.equals("org.apache.derby.jdbc.EmbeddedDriver")) { boolean gotSQLExc = false; // This is weird: on normal shutdown, it throws an exception try { DriverManager.getConnection("jdbc:derby:;shutdown=true").close(); } catch (SQLException se) { if (se.getSQLState().equals("XJ015")) { gotSQLExc = true; } } if (!gotSQLExc) { logger.error("Database did not shut down normally"); } else { logger.info("Database shut down normally"); } } super.finalize(); } public void reset() { // just move the namespace over to the next one version++; currTopicSeqIds.clear(); } } MapMethods.java000066400000000000000000000036121244507361200344400ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/persistence/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.persistence; import java.util.Collection; import java.util.Map; public class MapMethods { public static V getAfterInsertingIfAbsent(Map map, K key, Factory valueFactory) { V value = map.get(key); if (value == null) { value = valueFactory.newInstance(); map.put(key, value); } return value; } public static > void addToMultiMap(Map map, K key, V value, Factory valueFactory) { Collection collection = getAfterInsertingIfAbsent(map, key, valueFactory); collection.add(value); } public static > boolean removeFromMultiMap(Map map, K key, V value) { Collection collection = map.get(key); if (collection == null) { return false; } if (!collection.remove(value)) { return false; } else { if (collection.isEmpty()) { map.remove(key); } return true; } } } PersistRequest.java000066400000000000000000000035341244507361200354040ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/persistence/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. 
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.persistence; import com.google.protobuf.ByteString; import org.apache.hedwig.protocol.PubSubProtocol; import org.apache.hedwig.protocol.PubSubProtocol.Message; import org.apache.hedwig.util.Callback; /** * Encapsulates a request to persist a given message on a given topic. The * request is completed asynchronously, callback and context are provided * */ public class PersistRequest { ByteString topic; Message message; private Callback callback; Object ctx; public PersistRequest(ByteString topic, Message message, Callback callback, Object ctx) { this.topic = topic; this.message = message; this.callback = callback; this.ctx = ctx; } public ByteString getTopic() { return topic; } public Message getMessage() { return message; } public Callback getCallback() { return callback; } public Object getCtx() { return ctx; } } PersistenceManager.java000066400000000000000000000076031244507361200361620ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/persistence/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.persistence; import com.google.protobuf.ByteString; import org.apache.hedwig.exceptions.PubSubException.ServerNotResponsibleForTopicException; import org.apache.hedwig.protocol.PubSubProtocol.MessageSeqId; /** * An implementation of this interface will persist messages in order and assign * a seqId to each persisted message. SeqId need not be a single number in * general. SeqId is opaque to all layers above {@link PersistenceManager}. Only * the {@link PersistenceManager} needs to understand the format of the seqId * and maintain it in such a way that there is a total order on the seqIds of a * topic. * */ public interface PersistenceManager { /** * Executes the given persist request asynchronously. When done, the * callback specified in the request object is called with the result of the * operation set to the {@link LocalMessageSeqId} assigned to the persisted * message. */ public void persistMessage(PersistRequest request); /** * Get the seqId of the last message that has been persisted to the given * topic. The returned seqId will be set as the consume position of any * brand new subscription on this topic. 
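     * <p>
     * For example, a subscribe path might seed a brand new subscription
     * (hypothetical sketch; {@code subscriptionState} is an invented helper):
     * <pre>{@code
     * MessageSeqId startPos = persistenceManager.getCurrentSeqIdForTopic(topic);
     * subscriptionState.setConsumePosition(startPos);
     * }</pre>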
* * Note that the return value may quickly become invalid because a * {@link #persistMessage(String, PublishedMessage)} call from another * thread succeeds. For us, the typical use case is choosing the consume * position of a new subscriber. Since the subscriber need not receive all * messages that are published while the subscribe call is in progress, such * loose semantics from this method is acceptable. * * @param topic * @return the seqId of the last persisted message. * @throws ServerNotResponsibleForTopicException */ public MessageSeqId getCurrentSeqIdForTopic(ByteString topic) throws ServerNotResponsibleForTopicException; /** * Executes the given scan request * */ public void scanSingleMessage(ScanRequest request); /** * Gets the next seq-id. This method should never block. */ public long getSeqIdAfterSkipping(ByteString topic, long seqId, int skipAmount); /** * Hint that the messages until the given seqId have been delivered and wont * be needed unless there is a failure of some kind */ public void deliveredUntil(ByteString topic, Long seqId); /** * Hint that the messages until the given seqId have been consumed by all * subscribers to the topic and no longer need to be stored. The * implementation classes can decide how and if they want to garbage collect * and delete these older topic messages that are no longer needed. * * @param topic * Topic * @param seqId * Message local sequence ID */ public void consumedUntil(ByteString topic, Long seqId); public void setMessageBound(ByteString topic, Integer bound); public void clearMessageBound(ByteString topic); public void consumeToBound(ByteString topic); /** * Stop persistence manager. */ public void stop(); } PersistenceManagerWithRangeScan.java000066400000000000000000000020701244507361200405710ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/persistence/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.persistence; public interface PersistenceManagerWithRangeScan extends PersistenceManager { /** * Executes the given range scan request * * @param request */ public void scanMessages(RangeScanRequest request); } RangeScanRequest.java000066400000000000000000000047761244507361200356250ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/persistence/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. 
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.persistence; import com.google.protobuf.ByteString; /** * Encapsulates a request to scan messages on the given topic starting from the * given seqId (included). A call-back {@link ScanCallback} is provided. As * messages are scanned, the relevant methods of the {@link ScanCallback} are * called. Two hints are provided as to when scanning should stop: in terms of * number of messages scanned, or in terms of the total size of messages * scanned. Scanning stops whenever one of these limits is exceeded. These * checks, especially the one about message size, are only approximate. The * {@link ScanCallback} used should be prepared to deal with more or less * messages scanned. If an error occurs during scanning, the * {@link ScanCallback} is notified of the error. * */ public class RangeScanRequest { ByteString topic; long startSeqId; int messageLimit; long sizeLimit; ScanCallback callback; Object ctx; public RangeScanRequest(ByteString topic, long startSeqId, int messageLimit, long sizeLimit, ScanCallback callback, Object ctx) { this.topic = topic; this.startSeqId = startSeqId; this.messageLimit = messageLimit; this.sizeLimit = sizeLimit; this.callback = callback; this.ctx = ctx; } public ByteString getTopic() { return topic; } public long getStartSeqId() { return startSeqId; } public int getMessageLimit() { return messageLimit; } public long getSizeLimit() { return sizeLimit; } public ScanCallback getCallback() { return callback; } public Object getCtx() { return ctx; } } ReadAheadCache.java000066400000000000000000000765711244507361200351370ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/persistence/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
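*
* A minimal sketch of issuing the range scan described above (hedged; the
* persistence manager and callback instances are assumed to already exist):
* <pre>{@code
* RangeScanRequest req = new RangeScanRequest(topic, 1L, 100, 1024 * 1024, scanCallback, null);
* rangeScanPersistenceManager.scanMessages(req); // roughly 100 messages or ~1MB, whichever first
* }</pre>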
*/ package org.apache.hedwig.server.persistence; import java.util.HashSet; import java.util.Iterator; import java.util.LinkedList; import java.util.Queue; import java.util.Set; import java.util.SortedMap; import java.util.SortedSet; import java.util.TreeMap; import java.util.TreeSet; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ConcurrentMap; import java.util.concurrent.RejectedExecutionException; import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.atomic.AtomicLong; import org.apache.bookkeeper.util.MathUtils; import org.apache.bookkeeper.util.OrderedSafeExecutor; import org.apache.bookkeeper.util.SafeRunnable; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.exceptions.PubSubException.ServerNotResponsibleForTopicException; import org.apache.hedwig.protocol.PubSubProtocol; import org.apache.hedwig.protocol.PubSubProtocol.Message; import org.apache.hedwig.protocol.PubSubProtocol.MessageSeqId; import org.apache.hedwig.protoextensions.MessageIdUtils; import org.apache.hedwig.server.common.ServerConfiguration; import org.apache.hedwig.server.jmx.HedwigJMXService; import org.apache.hedwig.server.jmx.HedwigMBeanInfo; import org.apache.hedwig.server.jmx.HedwigMBeanRegistry; import org.apache.hedwig.util.Callback; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import com.google.protobuf.ByteString; public class ReadAheadCache implements PersistenceManager, HedwigJMXService { static Logger logger = LoggerFactory.getLogger(ReadAheadCache.class); protected interface CacheRequest { public void performRequest(); } /** * The underlying persistence manager that will be used for persistence and * scanning below the cache */ protected PersistenceManagerWithRangeScan realPersistenceManager; /** * The structure for the cache */ protected ConcurrentMap cache = new ConcurrentHashMap(); /** * We also want to track the entries in seq-id order so that we can clean up * entries after the last subscriber */ protected ConcurrentMap> orderedIndexOnSeqId = new ConcurrentHashMap>(); /** * Partition Cache into Serveral Segments for simplify synchronization. * Each segment maintains its time index and segment size. */ static class CacheSegment { /** * We want to keep track of when entries were added in the cache, so that we * can remove them in a FIFO fashion */ protected SortedMap> timeIndexOfAddition = new TreeMap>(); /** * We maintain an estimate of the current size of each cache segment, * so that the thread know when to evict entries from cache segment. */ protected AtomicLong presentSegmentSize = new AtomicLong(0); } /** * We maintain an estimate of the current size of the cache, so that we know * when to evict entries. */ protected AtomicLong presentCacheSize = new AtomicLong(0); /** * Num pending requests. 
*/ protected AtomicInteger numPendingRequests = new AtomicInteger(0); /** * Cache segment for different threads */ protected final ThreadLocal cacheSegment = new ThreadLocal() { @Override protected CacheSegment initialValue() { return new CacheSegment(); } }; /** * One instance of a callback that we will pass to the underlying * persistence manager when asking it to persist messages */ protected PersistCallback persistCallbackInstance = new PersistCallback(); /** * 2 kinds of exceptions that we will use to signal error from readahead */ protected NoSuchSeqIdException noSuchSeqIdExceptionInstance = new NoSuchSeqIdException(); protected ReadAheadException readAheadExceptionInstance = new ReadAheadException(); protected ServerConfiguration cfg; // Boolean indicating if this thread should continue running. This is used // when we want to stop the thread during a PubSubServer shutdown. protected volatile boolean keepRunning = true; protected final OrderedSafeExecutor cacheWorkers; protected final int numCacheWorkers; protected volatile long maxSegmentSize; protected volatile long cacheEntryTTL; // JMX Beans ReadAheadCacheBean jmxCacheBean = null; /** * Constructor. Starts the cache maintainer thread * * @param realPersistenceManager */ public ReadAheadCache(PersistenceManagerWithRangeScan realPersistenceManager, ServerConfiguration cfg) { this.realPersistenceManager = realPersistenceManager; this.cfg = cfg; numCacheWorkers = cfg.getNumReadAheadCacheThreads(); cacheWorkers = new OrderedSafeExecutor(numCacheWorkers); reloadConf(cfg); } /** * Reload configuration * * @param conf * Server configuration object */ protected void reloadConf(ServerConfiguration cfg) { maxSegmentSize = cfg.getMaximumCacheSize() / numCacheWorkers; cacheEntryTTL = cfg.getCacheEntryTTL(); } public ReadAheadCache start() { return this; } /** * ======================================================================== * Methods of {@link PersistenceManager} that we will pass straight down to * the real persistence manager. */ @Override public long getSeqIdAfterSkipping(ByteString topic, long seqId, int skipAmount) { return realPersistenceManager.getSeqIdAfterSkipping(topic, seqId, skipAmount); } @Override public MessageSeqId getCurrentSeqIdForTopic(ByteString topic) throws ServerNotResponsibleForTopicException { return realPersistenceManager.getCurrentSeqIdForTopic(topic); } /** * ======================================================================== * Other methods of {@link PersistenceManager} that the cache needs to take * some action on. * * 1. Persist: We pass it through to the real persistence manager but insert * our callback on the return path * */ @Override public void persistMessage(PersistRequest request) { // make a new PersistRequest object so that we can insert our own // callback in the middle. Assign the original request as the context // for the callback. PersistRequest newRequest = new PersistRequest(request.getTopic(), request.getMessage(), persistCallbackInstance, request); realPersistenceManager.persistMessage(newRequest); } /** * The callback that we insert on the persist request return path. The * callback simply forms a {@link PersistResponse} object and inserts it in * the request queue to be handled serially by the cache maintainer thread. 
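* <p>A hedged caller-side sketch of the persist path this callback serves
* (the callback body below is illustrative):
* <pre>{@code
* cache.persistMessage(new PersistRequest(topic, msg,
*     new Callback<MessageSeqId>() {
*         public void operationFinished(Object ctx, MessageSeqId seqId) {
*             // persisted; the message is now opportunistically cached too
*         }
*         public void operationFailed(Object ctx, PubSubException e) {
*             // underlying persistence error, passed through unchanged
*         }
*     }, null));
* }</pre>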
* */ public class PersistCallback implements Callback { /** * In case there is a failure in persisting, just pass it to the * original callback */ @Override public void operationFailed(Object ctx, PubSubException exception) { PersistRequest originalRequest = (PersistRequest) ctx; Callback originalCallback = originalRequest.getCallback(); Object originalContext = originalRequest.getCtx(); originalCallback.operationFailed(originalContext, exception); } /** * When the persist finishes, we first notify the original callback of * success, and then opportunistically treat the message as if it just * came in through a scan */ @Override public void operationFinished(Object ctx, PubSubProtocol.MessageSeqId resultOfOperation) { PersistRequest originalRequest = (PersistRequest) ctx; // Lets call the original callback first so that the publisher can // hear success originalRequest.getCallback().operationFinished(originalRequest.getCtx(), resultOfOperation); // Original message that was persisted didn't have the local seq-id. // Lets add that in Message messageWithLocalSeqId = MessageIdUtils.mergeLocalSeqId(originalRequest.getMessage(), resultOfOperation.getLocalComponent()); // Now enqueue a request to add this newly persisted message to our // cache CacheKey cacheKey = new CacheKey(originalRequest.getTopic(), resultOfOperation.getLocalComponent()); enqueueWithoutFailureByTopic(cacheKey.getTopic(), new ScanResponse(cacheKey, messageWithLocalSeqId)); } } protected void enqueueWithoutFailureByTopic(ByteString topic, final CacheRequest obj) { if (!keepRunning) { return; } try { numPendingRequests.incrementAndGet(); cacheWorkers.submitOrdered(topic, new SafeRunnable() { @Override public void safeRun() { numPendingRequests.decrementAndGet(); obj.performRequest(); } }); } catch (RejectedExecutionException ree) { logger.error("Failed to submit cache request for topic " + topic.toStringUtf8() + " : ", ree); } } /** * Another method from {@link PersistenceManager}. * * 2. Scan - Since the scan needs to touch the cache, we will just enqueue * the scan request and let the cache maintainer thread handle it. */ @Override public void scanSingleMessage(ScanRequest request) { // Let the scan requests be serialized through the queue enqueueWithoutFailureByTopic(request.getTopic(), new ScanRequestWrapper(request)); } /** * Another method from {@link PersistenceManager}. * * 3. Enqueue the request so that the cache maintainer thread can delete all * message-ids older than the one specified */ @Override public void deliveredUntil(ByteString topic, Long seqId) { enqueueWithoutFailureByTopic(topic, new DeliveredUntil(topic, seqId)); } /** * Another method from {@link PersistenceManager}. * * Since this is a cache layer on top of an underlying persistence manager, * we can just call the consumedUntil method there. The messages older than * the latest one passed here won't be accessed anymore so they should just * get aged out of the cache eventually. For now, there is no need to * proactively remove those entries from the cache. */ @Override public void consumedUntil(ByteString topic, Long seqId) { realPersistenceManager.consumedUntil(topic, seqId); } @Override public void setMessageBound(ByteString topic, Integer bound) { realPersistenceManager.setMessageBound(topic, bound); } @Override public void clearMessageBound(ByteString topic) { realPersistenceManager.clearMessageBound(topic); } @Override public void consumeToBound(ByteString topic) { realPersistenceManager.consumeToBound(topic); } /** * Stop the readahead cache. 
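* <p>Lifecycle sketch (hedged): construction wires the ordered cache workers,
* and stopping flips {@code keepRunning} before draining them:
* <pre>{@code
* ReadAheadCache cache = new ReadAheadCache(realPersistenceManager, cfg).start();
* // ... serve persist and scan traffic ...
* cache.stop(); // no new cache requests are enqueued after this point
* }</pre>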
*/ @Override public void stop() { try { keepRunning = false; cacheWorkers.shutdown(); } catch (Exception e) { logger.warn("Failed to shut down cache workers : ", e); } } /** * The readahead policy is simple: We check if an entry already exists for * the message being requested. If an entry exists, it means that either * that message is already in the cache, or a read for that message is * outstanding. In that case, we look a little ahead (by readAheadCount/2) * and issue a range read of readAheadCount/2 messages. The idea is to * ensure that the next readAheadCount messages are always available. * * @return the range scan that should be issued for read ahead */ protected RangeScanRequest doReadAhead(ScanRequest request) { ByteString topic = request.getTopic(); Long seqId = request.getStartSeqId(); int readAheadCount = cfg.getReadAheadCount(); // To prevent us from getting screwed by bad configuration readAheadCount = Math.max(1, readAheadCount); RangeScanRequest readAheadRequest = doReadAheadStartingFrom(topic, seqId, readAheadCount); if (readAheadRequest != null) { return readAheadRequest; } // start key was already there in the cache so no readahead happened, // lets look a little beyond seqId = realPersistenceManager.getSeqIdAfterSkipping(topic, seqId, readAheadCount / 2); readAheadRequest = doReadAheadStartingFrom(topic, seqId, readAheadCount / 2); return readAheadRequest; } /** * This method just checks if the provided seq-id already exists in the * cache. If not, a range read of the specified amount is issued. * * @param topic * @param seqId * @param readAheadCount * @return The range read that should be issued */ protected RangeScanRequest doReadAheadStartingFrom(ByteString topic, long seqId, int readAheadCount) { long startSeqId = seqId; Queue installedStubs = new LinkedList(); int i = 0; for (; i < readAheadCount; i++) { CacheKey cacheKey = new CacheKey(topic, seqId); // Even if a stub exists, it means that a scan for that is // outstanding if (cache.containsKey(cacheKey)) { break; } CacheValue cacheValue = new CacheValue(); if (null != cache.putIfAbsent(cacheKey, cacheValue)) { logger.warn("It is unexpected that more than one threads are adding message to cache key {}" +" at the same time.", cacheKey); } logger.debug("Adding cache stub for: {}", cacheKey); installedStubs.add(cacheKey); seqId = realPersistenceManager.getSeqIdAfterSkipping(topic, seqId, 1); } // so how many did we decide to readahead if (i == 0) { // no readahead, hence return false return null; } long readAheadSizeLimit = cfg.getReadAheadSizeBytes(); ReadAheadScanCallback callback = new ReadAheadScanCallback(installedStubs, topic); RangeScanRequest rangeScanRequest = new RangeScanRequest(topic, startSeqId, i, readAheadSizeLimit, callback, null); return rangeScanRequest; } /** * This is the callback that is used for the range scans. 
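* <p>A worked sketch of the read-ahead policy above (hedged; contiguous
* seq-ids are assumed for illustration, with readAheadCount = 10):
* <pre>{@code
* // scan for seqId 100, 100 absent : stubs installed for 100..109,
* //   one range scan of up to 10 messages is issued
* // scan for seqId 100, 100 cached : skip 10/2 = 5 ahead to seqId 105,
* //   then stub and scan at most 5 messages so 105..109 stay warm
* }</pre>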
*/ protected class ReadAheadScanCallback implements ScanCallback { Queue installedStubs; ByteString topic; /** * Constructor * * @param installedStubs * The list of stubs that were installed for this range scan * @param topic */ public ReadAheadScanCallback(Queue installedStubs, ByteString topic) { this.installedStubs = installedStubs; this.topic = topic; } @Override public void messageScanned(Object ctx, Message message) { // Any message we read is potentially useful for us, so lets first // enqueue it CacheKey cacheKey = new CacheKey(topic, message.getMsgId().getLocalComponent()); enqueueWithoutFailureByTopic(topic, new ScanResponse(cacheKey, message)); // Now lets see if this message is the one we were expecting CacheKey expectedKey = installedStubs.peek(); if (expectedKey == null) { // Was not expecting any more messages to come in, but they came // in so we will keep them return; } if (expectedKey.equals(cacheKey)) { // what we got is what we expected, dequeue it so we get the // next expected one installedStubs.poll(); return; } // If reached here, what we scanned was not what we were expecting. // This means that we have wrong stubs installed in the cache. We // should remove them, so that whoever is waiting on them can retry. // This shouldn't be happening usually logger.warn("Unexpected message seq-id: " + message.getMsgId().getLocalComponent() + " on topic: " + topic.toStringUtf8() + " from readahead scan, was expecting seq-id: " + expectedKey.seqId + " topic: " + expectedKey.topic.toStringUtf8() + " installedStubs: " + installedStubs); enqueueDeleteOfRemainingStubs(noSuchSeqIdExceptionInstance); } @Override public void scanFailed(Object ctx, Exception exception) { enqueueDeleteOfRemainingStubs(exception); } @Override public void scanFinished(Object ctx, ReasonForFinish reason) { // If the scan finished because no more messages are present, its ok // to leave the stubs in place because they will get filled in as // new publishes happen. However, if the scan finished due to some // other reason, e.g., read ahead size limit was reached, we want to // delete the stubs, so that when the time comes, we can schedule // another readahead request. if (reason != ReasonForFinish.NO_MORE_MESSAGES) { enqueueDeleteOfRemainingStubs(readAheadExceptionInstance); } } private void enqueueDeleteOfRemainingStubs(Exception reason) { CacheKey installedStub; while ((installedStub = installedStubs.poll()) != null) { enqueueWithoutFailureByTopic(installedStub.getTopic(), new ExceptionOnCacheKey(installedStub, reason)); } } } protected static class HashSetCacheKeyFactory implements Factory> { protected final static HashSetCacheKeyFactory instance = new HashSetCacheKeyFactory(); @Override public Set newInstance() { return new HashSet(); } } protected static class TreeSetLongFactory implements Factory> { protected final static TreeSetLongFactory instance = new TreeSetLongFactory(); @Override public SortedSet newInstance() { return new TreeSet(); } } /** * For adding the message to the cache, we do some bookeeping such as the * total size of cache, order in which entries were added etc. If the size * of the cache has exceeded our budget, old entries are collected. 
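* <p>Bookkeeping sketch (hedged, illustrative numbers): filling a stub with a
* 1000-byte message body performs
* <pre>{@code
* segment.presentSegmentSize.addAndGet(1000);
* presentCacheSize.addAndGet(1000);
* }</pre>
* whereas re-adding a key whose value is already concrete changes neither
* counter, so the budget is charged once per entry.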
* * @param cacheKey * @param message */ protected void addMessageToCache(final CacheKey cacheKey, final Message message, final long currTime) { logger.debug("Adding msg {} to readahead cache", cacheKey); CacheValue cacheValue; if ((cacheValue = cache.get(cacheKey)) == null) { cacheValue = new CacheValue(); CacheValue oldValue = cache.putIfAbsent(cacheKey, cacheValue); if (null != oldValue) { logger.warn("Weird! Should not have two threads adding message to cache key {} at the same time.", cacheKey); cacheValue = oldValue; } } CacheSegment segment = cacheSegment.get(); if (cacheValue.isStub()) { // update cache size only when cache value is a stub int size = message.getBody().size(); // update the cache size segment.presentSegmentSize.addAndGet(size); presentCacheSize.addAndGet(size); } synchronized (cacheValue) { // finally add the message to the cache cacheValue.setMessageAndInvokeCallbacks(message, currTime); } // maintain the index of seq-id // no lock since threads are partitioned by topics MapMethods.addToMultiMap(orderedIndexOnSeqId, cacheKey.getTopic(), cacheKey.getSeqId(), TreeSetLongFactory.instance); // maintain the time index of addition MapMethods.addToMultiMap(segment.timeIndexOfAddition, currTime, cacheKey, HashSetCacheKeyFactory.instance); collectOldOrExpiredCacheEntries(segment); } protected void removeMessageFromCache(final CacheKey cacheKey, Exception exception, final boolean maintainTimeIndex, final boolean maintainSeqIdIndex) { CacheValue cacheValue = cache.remove(cacheKey); if (cacheValue == null) { return; } CacheSegment segment = cacheSegment.get(); long timeOfAddition = 0; synchronized (cacheValue) { if (cacheValue.isStub()) { cacheValue.setErrorAndInvokeCallbacks(exception); // Stubs are not present in the indexes, so don't need to maintain // indexes here return; } int size = 0 - cacheValue.getMessage().getBody().size(); presentCacheSize.addAndGet(size); segment.presentSegmentSize.addAndGet(size); timeOfAddition = cacheValue.getTimeOfAddition(); } if (maintainSeqIdIndex) { MapMethods.removeFromMultiMap(orderedIndexOnSeqId, cacheKey.getTopic(), cacheKey.getSeqId()); } if (maintainTimeIndex) { MapMethods.removeFromMultiMap(segment.timeIndexOfAddition, timeOfAddition, cacheKey); } } /** * Collection of old entries is simple. Just collect in insert-time order, * oldest to newest. */ protected void collectOldOrExpiredCacheEntries(CacheSegment segment) { if (cacheEntryTTL > 0) { // clear expired entries while (!segment.timeIndexOfAddition.isEmpty()) { Long earliestTime = segment.timeIndexOfAddition.firstKey(); if (MathUtils.now() - earliestTime < cacheEntryTTL) { break; } collectCacheEntriesAtTimestamp(segment, earliestTime); } } while (segment.presentSegmentSize.get() > maxSegmentSize && !segment.timeIndexOfAddition.isEmpty()) { Long earliestTime = segment.timeIndexOfAddition.firstKey(); collectCacheEntriesAtTimestamp(segment, earliestTime); } } private void collectCacheEntriesAtTimestamp(CacheSegment segment, long timestamp) { Set oldCacheEntries = segment.timeIndexOfAddition.get(timestamp); // Note: only concrete cache entries, and not stubs are in the time // index. Hence there can be no callbacks pending on these cache // entries. Hence safe to remove them directly. 
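// Illustrative walk-through (hedged numbers): with cacheEntryTTL = 60000 ms
// and maxSegmentSize = 16 MB, the caller above first drains every timestamp
// bucket older than 60 s, then keeps draining the oldest remaining buckets
// until presentSegmentSize falls back under 16 MB; each drained bucket is
// handed to this method so its keys can be evicted.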
for (Iterator iter = oldCacheEntries.iterator(); iter.hasNext();) { final CacheKey cacheKey = iter.next(); logger.debug("Removing {} from cache because it's the oldest.", cacheKey); removeMessageFromCache(cacheKey, readAheadExceptionInstance, // // maintainTimeIndex= false, // maintainSeqIdIndex= true); } segment.timeIndexOfAddition.remove(timestamp); } /** * ======================================================================== * The rest is just simple wrapper classes. * */ protected class ExceptionOnCacheKey implements CacheRequest { CacheKey cacheKey; Exception exception; public ExceptionOnCacheKey(CacheKey cacheKey, Exception exception) { this.cacheKey = cacheKey; this.exception = exception; } /** * If for some reason, an outstanding read on a cache stub fails, * exception for that key is enqueued by the * {@link ReadAheadScanCallback}. To handle this, we simply send error * on the callbacks registered for that stub, and delete the entry from * the cache */ @Override public void performRequest() { removeMessageFromCache(cacheKey, exception, // maintainTimeIndex= true, // maintainSeqIdIndex= true); } } @SuppressWarnings("serial") protected static class NoSuchSeqIdException extends Exception { public NoSuchSeqIdException() { super("No such seq-id"); } } @SuppressWarnings("serial") protected static class ReadAheadException extends Exception { public ReadAheadException() { super("Readahead failed"); } } public class CancelScanRequestOp implements CacheRequest { final CancelScanRequest request; public CancelScanRequestOp(CancelScanRequest request) { this.request = request; } @Override public void performRequest() { // cancel scan request cancelScanRequest(request.getScanRequest()); } void cancelScanRequest(ScanRequest request) { if (null == request) { // nothing to cancel return; } CacheKey cacheKey = new CacheKey(request.getTopic(), request.getStartSeqId()); CacheValue cacheValue = cache.get(cacheKey); if (null == cacheValue) { // cache value is evicted // so it's callback would be called, we don't need to worry about // cancel it. since it was treated as executed. 
return; } cacheValue.removeCallback(request.getCallback(), request.getCtx()); } } public void cancelScanRequest(ByteString topic, CancelScanRequest request) { enqueueWithoutFailureByTopic(topic, new CancelScanRequestOp(request)); } protected class ScanResponse implements CacheRequest { CacheKey cacheKey; Message message; public ScanResponse(CacheKey cacheKey, Message message) { this.cacheKey = cacheKey; this.message = message; } @Override public void performRequest() { addMessageToCache(cacheKey, message, MathUtils.now()); } } protected class DeliveredUntil implements CacheRequest { ByteString topic; Long seqId; public DeliveredUntil(ByteString topic, Long seqId) { this.topic = topic; this.seqId = seqId; } @Override public void performRequest() { SortedSet orderedSeqIds = orderedIndexOnSeqId.get(topic); if (orderedSeqIds == null) { return; } // focus on the set of messages with seq-ids <= the one that // has been delivered until SortedSet headSet = orderedSeqIds.headSet(seqId + 1); for (Iterator iter = headSet.iterator(); iter.hasNext();) { Long seqId = iter.next(); CacheKey cacheKey = new CacheKey(topic, seqId); logger.debug("Removing {} from cache because every subscriber has moved past", cacheKey); removeMessageFromCache(cacheKey, readAheadExceptionInstance, // // maintainTimeIndex= true, // maintainSeqIdIndex= false); iter.remove(); } if (orderedSeqIds.isEmpty()) { orderedIndexOnSeqId.remove(topic); } } } protected class ScanRequestWrapper implements CacheRequest { ScanRequest request; public ScanRequestWrapper(ScanRequest request) { this.request = request; } /** * To handle a scan request, we first try to do readahead (which might * cause a range read to be issued to the underlying persistence * manager). The readahead will put a stub in the cache, if the message * is not already present in the cache. The scan callback that is part * of the scan request is added to this stub, and will be called later * when the message arrives as a result of the range scan issued to the * underlying persistence manager. */ @Override public void performRequest() { RangeScanRequest readAheadRequest = doReadAhead(request); // Read ahead must have installed at least a stub for us, so this // can't be null CacheKey cacheKey = new CacheKey(request.getTopic(), request.getStartSeqId()); CacheValue cacheValue = cache.get(cacheKey); if (null == cacheValue) { logger.error("Cache key {} is removed after installing stub when scanning.", cacheKey); // reissue the request scanSingleMessage(request); return; } synchronized (cacheValue) { // Add our callback to the stub. 
If the cache value was already a // concrete message, the callback will be called right away cacheValue.addCallback(request.getCallback(), request.getCtx()); } if (readAheadRequest != null) { realPersistenceManager.scanMessages(readAheadRequest); } } } @Override public void registerJMX(HedwigMBeanInfo parent) { try { jmxCacheBean = new ReadAheadCacheBean(this); HedwigMBeanRegistry.getInstance().register(jmxCacheBean, parent); } catch (Exception e) { logger.warn("Failed to register readahead cache with JMX", e); jmxCacheBean = null; } } @Override public void unregisterJMX() { try { if (jmxCacheBean != null) { HedwigMBeanRegistry.getInstance().unregister(jmxCacheBean); } } catch (Exception e) { logger.warn("Failed to unregister readahead cache with JMX", e); } } } ReadAheadCacheBean.java000066400000000000000000000033021244507361200357030ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/persistence/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.persistence; import org.apache.hedwig.server.jmx.HedwigMBeanInfo; /** * Read Ahead Cache Bean */ public class ReadAheadCacheBean implements ReadAheadCacheMXBean, HedwigMBeanInfo { ReadAheadCache cache; public ReadAheadCacheBean(ReadAheadCache cache) { this.cache = cache; } @Override public String getName() { return "ReadAheadCache"; } @Override public boolean isHidden() { return false; } @Override public long getMaxCacheSize() { return cache.cfg.getMaximumCacheSize(); } @Override public long getPresentCacheSize() { return cache.presentCacheSize.get(); } @Override public int getNumCachedEntries() { return cache.cache.size(); } @Override public int getNumPendingCacheRequests() { return cache.numPendingRequests.get(); } } ReadAheadCacheMXBean.java000066400000000000000000000024341244507361200361550ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/persistence/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
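*
* A hedged sketch of polling these attributes over standard JMX (the
* ObjectName shown is an assumption, not necessarily the registered name):
* <pre>{@code
* MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
* ObjectName name = new ObjectName("org.apache.HedwigServer:name=ReadAheadCache"); // assumed
* Long present = (Long) mbs.getAttribute(name, "PresentCacheSize");
* Integer pending = (Integer) mbs.getAttribute(name, "NumPendingCacheRequests");
* }</pre>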
*/ package org.apache.hedwig.server.persistence; /** * Read Ahead Cache MBean */ public interface ReadAheadCacheMXBean { /** * @return max cache size */ public long getMaxCacheSize(); /** * @return present cache size */ public long getPresentCacheSize(); /** * @return number of cached entries */ public int getNumCachedEntries(); /** * @return number of pending cache requests */ public int getNumPendingCacheRequests(); } ScanCallback.java000066400000000000000000000042361244507361200347030ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/persistence/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.persistence; import org.apache.hedwig.protocol.PubSubProtocol.Message; public interface ScanCallback { enum ReasonForFinish { NO_MORE_MESSAGES, SIZE_LIMIT_EXCEEDED, NUM_MESSAGES_LIMIT_EXCEEDED }; /** * This method is called when a message is read from the persistence layer * as part of a scan. The message just read is handed to this listener, which * can then take the desired action on it. * * @param ctx * The context for the callback * @param message * The message just scanned from the log */ public void messageScanned(Object ctx, Message message); /** * This method is called when the scan finishes. * * @param ctx * @param reason */ public abstract void scanFinished(Object ctx, ReasonForFinish reason); /** * This method is called when the operation failed due to some reason. The * reason for failure is passed in. * * @param ctx * The context for the callback * @param exception * The reason for the failure of the scan */ public abstract void scanFailed(Object ctx, Exception exception); } ScanCallbackWithContext.java000066400000000000000000000032431244507361200371010ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/persistence/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License.
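*
* A minimal implementation sketch of the ScanCallback above (hedged; the
* method bodies and surrounding names are illustrative):
* <pre>{@code
* ScanCallback cb = new ScanCallback() {
*     public void messageScanned(Object ctx, Message message) {
*         // handle one scanned message
*     }
*     public void scanFinished(Object ctx, ReasonForFinish reason) {
*         // NO_MORE_MESSAGES, SIZE_LIMIT_EXCEEDED or NUM_MESSAGES_LIMIT_EXCEEDED
*     }
*     public void scanFailed(Object ctx, Exception exception) {
*         // scan aborted; the exception carries the cause
*     }
* };
* readAheadCache.scanSingleMessage(new ScanRequest(topic, startSeqId, cb, null));
* }</pre>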
*/ package org.apache.hedwig.server.persistence; public class ScanCallbackWithContext { ScanCallback scanCallback; Object ctx; public ScanCallbackWithContext(ScanCallback callback, Object ctx) { this.scanCallback = callback; this.ctx = ctx; } public ScanCallback getScanCallback() { return scanCallback; } public Object getCtx() { return ctx; } @Override public boolean equals(Object other) { if (!(other instanceof ScanCallbackWithContext)) { return false; } ScanCallbackWithContext otherCb = (ScanCallbackWithContext) other; // Ensure that it was same callback & same ctx return scanCallback == otherCb.scanCallback && ctx == otherCb.ctx; } @Override public int hashCode() { return scanCallback.hashCode(); } } ScanRequest.java000066400000000000000000000041151244507361200346330ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/persistence/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.persistence; import com.google.protobuf.ByteString; import org.apache.hedwig.protocol.PubSubProtocol.Message; /** * Encapsulates a request for reading a single message. The message on the given * topic at the given seqId is scanned. A call-back {@link ScanCallback} * is provided. When the message is scanned, the * {@link ScanCallback#messageScanned(Object, Message)} method is called. Since * there is only 1 record to be scanned the * {@link ScanCallback#operationFinished(Object)} method may not be called since * its redundant. * {@link ScanCallback#scanFailed(Object, org.apache.hedwig.exceptions.PubSubException)} * method is called in case of error. * */ public class ScanRequest { ByteString topic; long startSeqId; ScanCallback callback; Object ctx; public ScanRequest(ByteString topic, long startSeqId, ScanCallback callback, Object ctx) { this.topic = topic; this.startSeqId = startSeqId; this.callback = callback; this.ctx = ctx; } public ByteString getTopic() { return topic; } public long getStartSeqId() { return startSeqId; } public ScanCallback getCallback() { return callback; } public Object getCtx() { return ctx; } } bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/proxy/000077500000000000000000000000001244507361200304465ustar00rootroot00000000000000ChannelTracker.java000066400000000000000000000114721244507361200341230ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/proxy/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. 
The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.proxy; import java.util.HashMap; import java.util.LinkedList; import java.util.List; import org.jboss.netty.channel.Channel; import com.google.protobuf.ByteString; import org.apache.hedwig.client.api.Subscriber; import org.apache.hedwig.client.data.TopicSubscriber; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.exceptions.PubSubException.TopicBusyException; import org.apache.hedwig.server.handlers.ChannelDisconnectListener; import org.apache.hedwig.util.Callback; public class ChannelTracker implements ChannelDisconnectListener { HashMap topicSub2Channel = new HashMap(); HashMap> channel2TopicSubs = new HashMap>(); Subscriber subscriber; public ChannelTracker(Subscriber subscriber) { this.subscriber = subscriber; } static Callback noOpCallback = new Callback() { public void operationFailed(Object ctx, PubSubException exception) { }; public void operationFinished(Object ctx, Void resultOfOperation) { }; }; public synchronized void channelDisconnected(Channel channel) { List topicSubs = channel2TopicSubs.remove(channel); if (topicSubs == null) { return; } for (TopicSubscriber topicSub : topicSubs) { topicSub2Channel.remove(topicSub); subscriber.asyncCloseSubscription(topicSub.getTopic(), topicSub.getSubscriberId(), noOpCallback, null); } } public synchronized void subscribeSucceeded(TopicSubscriber topicSubscriber, Channel channel) throws TopicBusyException { if (!channel.isConnected()) { // channel got disconnected while we were processing the // subscribe request, nothing much we can do in this case return; } if (topicSub2Channel.containsKey(topicSubscriber)) { TopicBusyException pse = new PubSubException.TopicBusyException( "subscription for this topic, subscriberId is already being served on a different channel"); throw pse; } topicSub2Channel.put(topicSubscriber, channel); List topicSubs = channel2TopicSubs.get(channel); if (topicSubs == null) { topicSubs = new LinkedList(); channel2TopicSubs.put(channel, topicSubs); } topicSubs.add(topicSubscriber); } public void aboutToCloseSubscription(ByteString topic, ByteString subscriberId) { removeSubscriber(topic, subscriberId); } public void aboutToUnsubscribe(ByteString topic, ByteString subscriberId) { removeSubscriber(topic, subscriberId); } private synchronized void removeSubscriber(ByteString topic, ByteString subscriberId) { TopicSubscriber topicSub = new TopicSubscriber(topic, subscriberId); Channel channel = topicSub2Channel.remove(topicSub); if (channel != null) { List topicSubs = channel2TopicSubs.get(channel); if (topicSubs != null) { topicSubs.remove(topicSub); } } } public synchronized void checkChannelMatches(ByteString topic, ByteString subscriberId, Channel channel) throws PubSubException { Channel subscribedChannel = getChannel(topic, subscriberId); if (subscribedChannel == null) { throw new PubSubException.ClientNotSubscribedException( "Can't start delivery since client is not subscribed"); } if 
(subscribedChannel != channel) { throw new PubSubException.TopicBusyException( "Can't start delivery since client is subscribed on a different channel"); } } public synchronized Channel getChannel(ByteString topic, ByteString subscriberId) { return topicSub2Channel.get(new TopicSubscriber(topic, subscriberId)); } } bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/proxy/HedwigProxy.java000066400000000000000000000162241244507361200335670ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.proxy; import java.io.File; import java.lang.Thread.UncaughtExceptionHandler; import java.net.InetSocketAddress; import java.net.MalformedURLException; import java.util.HashMap; import java.util.Map; import java.util.concurrent.Executors; import java.util.concurrent.LinkedBlockingQueue; import org.apache.commons.configuration.ConfigurationException; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.jboss.netty.bootstrap.ServerBootstrap; import org.jboss.netty.channel.group.ChannelGroup; import org.jboss.netty.channel.group.DefaultChannelGroup; import org.jboss.netty.channel.socket.ServerSocketChannelFactory; import org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory; import org.jboss.netty.logging.InternalLoggerFactory; import org.jboss.netty.logging.Log4JLoggerFactory; import org.apache.hedwig.client.HedwigClient; import org.apache.hedwig.protocol.PubSubProtocol.OperationType; import org.apache.hedwig.server.common.TerminateJVMExceptionHandler; import org.apache.hedwig.server.handlers.ChannelDisconnectListener; import org.apache.hedwig.server.handlers.Handler; import org.apache.hedwig.server.netty.PubSubServer; import org.apache.hedwig.server.netty.PubSubServerPipelineFactory; import org.apache.hedwig.server.netty.UmbrellaHandler; public class HedwigProxy { static final Logger logger = LoggerFactory.getLogger(HedwigProxy.class); HedwigClient client; ServerSocketChannelFactory serverSocketChannelFactory; ChannelGroup allChannels; Map handlers; ProxyConfiguration cfg; ChannelTracker tracker; ThreadGroup tg; public HedwigProxy(final ProxyConfiguration cfg, final UncaughtExceptionHandler exceptionHandler) { this.cfg = cfg; tg = new ThreadGroup("hedwigproxy") { @Override public void uncaughtException(Thread t, Throwable e) { exceptionHandler.uncaughtException(t, e); } }; } public HedwigProxy(ProxyConfiguration conf) throws InterruptedException { this(conf, new TerminateJVMExceptionHandler()); } public void start() throws InterruptedException { final LinkedBlockingQueue queue = new LinkedBlockingQueue(); new Thread(tg, new Runnable() { @Override public void run() { client = new HedwigClient(cfg); serverSocketChannelFactory = new 
NioServerSocketChannelFactory(Executors.newCachedThreadPool(), Executors.newCachedThreadPool()); initializeHandlers(); initializeNetty(); queue.offer(true); } }).start(); queue.take(); } // used for testing public ChannelTracker getChannelTracker() { return tracker; } protected void initializeHandlers() { handlers = new HashMap(); tracker = new ChannelTracker(client.getSubscriber()); handlers.put(OperationType.PUBLISH, new ProxyPublishHander(client.getPublisher())); handlers.put(OperationType.SUBSCRIBE, new ProxySubscribeHandler(client.getSubscriber(), tracker)); handlers.put(OperationType.UNSUBSCRIBE, new ProxyUnsubscribeHandler(client.getSubscriber(), tracker)); handlers.put(OperationType.CONSUME, new ProxyConsumeHandler(client.getSubscriber())); handlers.put(OperationType.STOP_DELIVERY, new ProxyStopDeliveryHandler(client.getSubscriber(), tracker)); handlers.put(OperationType.START_DELIVERY, new ProxyStartDeliveryHandler(client.getSubscriber(), tracker)); handlers.put(OperationType.CLOSESUBSCRIPTION, new ProxyCloseSubscriptionHandler(client.getSubscriber(), tracker)); } protected void initializeNetty() { InternalLoggerFactory.setDefaultFactory(new Log4JLoggerFactory()); allChannels = new DefaultChannelGroup("hedwigproxy"); ServerBootstrap bootstrap = new ServerBootstrap(serverSocketChannelFactory); ChannelDisconnectListener disconnectListener = (ChannelDisconnectListener) handlers.get(OperationType.SUBSCRIBE); UmbrellaHandler umbrellaHandler = new UmbrellaHandler(allChannels, handlers, disconnectListener, false); PubSubServerPipelineFactory pipeline = new PubSubServerPipelineFactory(umbrellaHandler, null, cfg .getMaximumMessageSize()); bootstrap.setPipelineFactory(pipeline); bootstrap.setOption("child.tcpNoDelay", true); bootstrap.setOption("child.keepAlive", true); bootstrap.setOption("reuseAddress", true); // Bind and start to accept incoming connections. 
allChannels.add(bootstrap.bind(new InetSocketAddress(cfg.getProxyPort()))); logger.info("Going into receive loop"); } public void shutdown() { allChannels.close().awaitUninterruptibly(); client.close(); serverSocketChannelFactory.releaseExternalResources(); } // the following method only exists for unit-testing purposes, should go // away once we make start delivery totally server-side public Handler getStartDeliveryHandler() { return handlers.get(OperationType.START_DELIVERY); } public Handler getStopDeliveryHandler() { return handlers.get(OperationType.STOP_DELIVERY); } /** * @param args */ public static void main(String[] args) { logger.info("Attempting to start Hedwig Proxy"); ProxyConfiguration conf = new ProxyConfiguration(); if (args.length > 0) { String confFile = args[0]; try { conf.loadConf(new File(confFile).toURI().toURL()); } catch (MalformedURLException e) { String msg = "Could not open configuration file: " + confFile; PubSubServer.errorMsgAndExit(msg, e, PubSubServer.RC_INVALID_CONF_FILE); } catch (ConfigurationException e) { String msg = "Malformed configuration file: " + confFile; PubSubServer.errorMsgAndExit(msg, e, PubSubServer.RC_MISCONFIGURED); } logger.info("Using configuration file " + confFile); } try { new HedwigProxy(conf).start(); } catch (Throwable t) { PubSubServer.errorMsgAndExit("Error during startup", t, PubSubServer.RC_OTHER); } } } ProxyCloseSubscriptionHandler.java000066400000000000000000000054371244507361200372550ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/proxy/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
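*
* Embedding sketch for HedwigProxy above (hedged; the handler body is
* illustrative, and the conf entries are the defaults from ProxyConfiguration
* below):
* <pre>{@code
* // conf file defaults: proxy_port=9099, max_message_size=1258291 (~1.2M)
* ProxyConfiguration conf = new ProxyConfiguration();
* HedwigProxy proxy = new HedwigProxy(conf, new Thread.UncaughtExceptionHandler() {
*     public void uncaughtException(Thread t, Throwable e) {
*         // log and decide whether to terminate
*     }
* });
* proxy.start();    // returns once the client and the Netty listener are up
* proxy.shutdown(); // closes channels, the client, and the socket factory
* }</pre>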
*/ package org.apache.hedwig.server.proxy; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.jboss.netty.channel.Channel; import com.google.protobuf.ByteString; import org.apache.hedwig.client.api.Subscriber; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.protocol.PubSubProtocol.PubSubRequest; import org.apache.hedwig.protoextensions.PubSubResponseUtils; import org.apache.hedwig.server.handlers.Handler; import org.apache.hedwig.server.netty.UmbrellaHandler; import org.apache.hedwig.util.Callback; public class ProxyCloseSubscriptionHandler implements Handler { static final Logger logger = LoggerFactory.getLogger(ProxyCloseSubscriptionHandler.class); Subscriber subscriber; ChannelTracker tracker; public ProxyCloseSubscriptionHandler(Subscriber subscriber, ChannelTracker tracker) { this.subscriber = subscriber; this.tracker = tracker; } @Override public void handleRequest(final PubSubRequest request, final Channel channel) { if (!request.hasCloseSubscriptionRequest()) { UmbrellaHandler.sendErrorResponseToMalformedRequest(channel, request.getTxnId(), "Missing close subscription request data"); return; } final ByteString topic = request.getTopic(); final ByteString subscriberId = request.getCloseSubscriptionRequest().getSubscriberId(); subscriber.asyncCloseSubscription(topic, subscriberId, new Callback() { @Override public void operationFailed(Object ctx, PubSubException exception) { channel.write(PubSubResponseUtils.getResponseForException(exception, request.getTxnId())); } @Override public void operationFinished(Object ctx, Void result) { tracker.aboutToCloseSubscription(topic, subscriberId); channel.write(PubSubResponseUtils.getSuccessResponse(request.getTxnId())); } }, null); } } ProxyConfiguration.java000066400000000000000000000024501244507361200351040ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/proxy/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.proxy; import org.apache.hedwig.client.conf.ClientConfiguration; public class ProxyConfiguration extends ClientConfiguration { protected final static String PROXY_PORT = "proxy_port"; protected final static String MAX_MESSAGE_SIZE = "max_message_size"; public int getProxyPort() { return conf.getInt(PROXY_PORT, 9099); } @Override public int getMaximumMessageSize() { return conf.getInt(MAX_MESSAGE_SIZE, 1258291); /* 1.2M */ } } ProxyConsumeHandler.java000066400000000000000000000042721244507361200352100ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/proxy/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. 
See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.proxy; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.jboss.netty.channel.Channel; import org.apache.hedwig.client.api.Subscriber; import org.apache.hedwig.exceptions.PubSubException.ClientNotSubscribedException; import org.apache.hedwig.protocol.PubSubProtocol.ConsumeRequest; import org.apache.hedwig.protocol.PubSubProtocol.PubSubRequest; import org.apache.hedwig.server.handlers.Handler; import org.apache.hedwig.server.netty.UmbrellaHandler; public class ProxyConsumeHandler implements Handler { static final Logger logger = LoggerFactory.getLogger(ProxyConsumeHandler.class); Subscriber subscriber; public ProxyConsumeHandler(Subscriber subscriber) { this.subscriber = subscriber; } @Override public void handleRequest(PubSubRequest request, Channel channel) { if (!request.hasConsumeRequest()) { UmbrellaHandler.sendErrorResponseToMalformedRequest(channel, request.getTxnId(), "Missing consume request data"); return; } ConsumeRequest consumeRequest = request.getConsumeRequest(); try { subscriber.consume(request.getTopic(), consumeRequest.getSubscriberId(), consumeRequest.getMsgId()); } catch (ClientNotSubscribedException e) { // ignore logger.warn("Unexpected consume request", e); } } } ProxyPublishHander.java000066400000000000000000000046271244507361200350350ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/proxy/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
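*
* The consume path in ProxyConsumeHandler above is fire-and-forget: nothing is
* written back to the proxied client. A hedged sketch of the client call it
* forwards:
* <pre>{@code
* subscriber.consume(topic, subscriberId, msgId); // a ClientNotSubscribedException is only logged
* }</pre>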
*/ package org.apache.hedwig.server.proxy; import org.jboss.netty.channel.Channel; import org.apache.hedwig.client.api.Publisher; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.protocol.PubSubProtocol.PubSubRequest; import org.apache.hedwig.protocol.PubSubProtocol.PublishRequest; import org.apache.hedwig.protoextensions.PubSubResponseUtils; import org.apache.hedwig.server.handlers.Handler; import org.apache.hedwig.server.netty.UmbrellaHandler; import org.apache.hedwig.util.Callback; public class ProxyPublishHander implements Handler { Publisher publisher; public ProxyPublishHander(Publisher publisher) { this.publisher = publisher; } @Override public void handleRequest(final PubSubRequest request, final Channel channel) { if (!request.hasPublishRequest()) { UmbrellaHandler.sendErrorResponseToMalformedRequest(channel, request.getTxnId(), "Missing publish request data"); return; } final PublishRequest publishRequest = request.getPublishRequest(); publisher.asyncPublish(request.getTopic(), publishRequest.getMsg(), new Callback<Void>() { @Override public void operationFailed(Object ctx, PubSubException exception) { channel.write(PubSubResponseUtils.getResponseForException(exception, request.getTxnId())); } @Override public void operationFinished(Object ctx, Void resultOfOperation) { channel.write(PubSubResponseUtils.getSuccessResponse(request.getTxnId())); } }, null); } }
bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/proxy/ProxyStartDeliveryHandler.java
/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License.
*/ package org.apache.hedwig.server.proxy; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.jboss.netty.channel.Channel; import org.jboss.netty.channel.ChannelFuture; import org.jboss.netty.channel.ChannelFutureListener; import com.google.protobuf.ByteString; import org.apache.hedwig.client.api.MessageHandler; import org.apache.hedwig.client.api.Subscriber; import org.apache.hedwig.client.exceptions.AlreadyStartDeliveryException; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.exceptions.PubSubException.ClientNotSubscribedException; import org.apache.hedwig.protocol.PubSubProtocol.Message; import org.apache.hedwig.protocol.PubSubProtocol.ProtocolVersion; import org.apache.hedwig.protocol.PubSubProtocol.PubSubRequest; import org.apache.hedwig.protocol.PubSubProtocol.PubSubResponse; import org.apache.hedwig.protocol.PubSubProtocol.StatusCode; import org.apache.hedwig.protoextensions.PubSubResponseUtils; import org.apache.hedwig.server.handlers.Handler; import org.apache.hedwig.server.netty.UmbrellaHandler; import org.apache.hedwig.util.Callback; public class ProxyStartDeliveryHandler implements Handler { static final Logger logger = LoggerFactory.getLogger(ProxyStartDeliveryHandler.class); Subscriber subscriber; ChannelTracker tracker; public ProxyStartDeliveryHandler(Subscriber subscriber, ChannelTracker tracker) { this.subscriber = subscriber; this.tracker = tracker; } @Override public void handleRequest(PubSubRequest request, Channel channel) { if (!request.hasStartDeliveryRequest()) { UmbrellaHandler.sendErrorResponseToMalformedRequest(channel, request.getTxnId(), "Missing start delivery request data"); return; } final ByteString topic = request.getTopic(); final ByteString subscriberId = request.getStartDeliveryRequest().getSubscriberId(); synchronized (tracker) { // try { // tracker.checkChannelMatches(topic, subscriberId, channel); // } catch (PubSubException e) { // channel.write(PubSubResponseUtils.getResponseForException(e, // request.getTxnId())); // return; // } final Channel subscribedChannel = tracker.getChannel(topic, subscriberId); if (subscribedChannel == null) { channel.write(PubSubResponseUtils.getResponseForException( new PubSubException.ClientNotSubscribedException("no subscription to start delivery on"), request.getTxnId())); return; } MessageHandler handler = new MessageHandler() { @Override public void deliver(ByteString topic, ByteString subscriberId, Message msg, final Callback callback, final Object context) { PubSubResponse response = PubSubResponse.newBuilder().setProtocolVersion( ProtocolVersion.VERSION_ONE).setStatusCode(StatusCode.SUCCESS).setTxnId(0).setMessage(msg) .setTopic(topic).setSubscriberId(subscriberId).build(); ChannelFuture future = subscribedChannel.write(response); future.addListener(new ChannelFutureListener() { @Override public void operationComplete(ChannelFuture future) throws Exception { if (!future.isSuccess()) { // ignoring this failure, because this will // only happen due to channel disconnect. 
// Channel disconnect will in turn stop // delivery, and stop these errors return; } // Tell the hedwig client that it can send me // more messages callback.operationFinished(context, null); } }); } }; channel.write(PubSubResponseUtils.getSuccessResponse(request.getTxnId())); try { subscriber.startDelivery(topic, subscriberId, handler); } catch (ClientNotSubscribedException e) { // This should not happen, since we already checked the correct // channel and so on logger.error("Unexpected: No subscription when attempting to start delivery", e); throw new RuntimeException(e); } catch (AlreadyStartDeliveryException e) { logger.error("Unexpected: Already start delivery when attempting to start delivery", e); throw new RuntimeException(e); } } } }
bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/proxy/ProxyStopDeliveryHandler.java
/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.proxy; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.jboss.netty.channel.Channel; import com.google.protobuf.ByteString; import org.apache.hedwig.client.api.Subscriber; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.exceptions.PubSubException.ClientNotSubscribedException; import org.apache.hedwig.protocol.PubSubProtocol.PubSubRequest; import org.apache.hedwig.server.handlers.Handler; import org.apache.hedwig.server.netty.UmbrellaHandler; public class ProxyStopDeliveryHandler implements Handler { static final Logger logger = LoggerFactory.getLogger(ProxyStopDeliveryHandler.class); Subscriber subscriber; ChannelTracker tracker; public ProxyStopDeliveryHandler(Subscriber subscriber, ChannelTracker tracker) { this.subscriber = subscriber; this.tracker = tracker; } @Override public void handleRequest(PubSubRequest request, Channel channel) { if (!request.hasStopDeliveryRequest()) { UmbrellaHandler.sendErrorResponseToMalformedRequest(channel, request.getTxnId(), "Missing stop delivery request data"); return; } final ByteString topic = request.getTopic(); final ByteString subscriberId = request.getStopDeliveryRequest().getSubscriberId(); synchronized (tracker) { try { tracker.checkChannelMatches(topic, subscriberId, channel); } catch (PubSubException e) { // intentionally ignore this error, since stop delivery doesn't // send back a response return; } try { subscriber.stopDelivery(topic, subscriberId); } catch (ClientNotSubscribedException e) { // This should not happen, since we already checked the correct // channel and so on logger.warn("Unexpected: No subscription when attempting to stop delivery", e); } } } }
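/*
 * Illustrative sketch, not a file from this release: each proxy handler in this package
 * validates that the PubSubRequest carries its expected sub-message, answers malformed
 * requests through UmbrellaHandler, and otherwise delegates to the client-side API. The
 * handlers are presumably registered by operation type elsewhere in the proxy; the wiring
 * below is an assumption for illustration only -- ProxyHandlerWiringSketch, buildHandlerMap
 * and its parameters are hypothetical, while the Handler interface, ChannelTracker,
 * HedwigClientImpl, and the concrete handler classes are the ones in the surrounding files.
 */
package org.apache.hedwig.server.proxy;

import java.util.HashMap;
import java.util.Map;

import org.apache.hedwig.client.netty.HedwigClientImpl;
import org.apache.hedwig.protocol.PubSubProtocol.OperationType;
import org.apache.hedwig.server.handlers.Handler;

class ProxyHandlerWiringSketch {
    // Build a per-operation dispatch table from the handlers defined in this package.
    static Map<OperationType, Handler> buildHandlerMap(HedwigClientImpl client, ChannelTracker tracker) {
        Map<OperationType, Handler> handlers = new HashMap<OperationType, Handler>();
        handlers.put(OperationType.PUBLISH, new ProxyPublishHander(client.getPublisher()));
        handlers.put(OperationType.SUBSCRIBE, new ProxySubscribeHandler(client.getSubscriber(), tracker));
        handlers.put(OperationType.UNSUBSCRIBE, new ProxyUnsubscribeHandler(client.getSubscriber(), tracker));
        handlers.put(OperationType.CONSUME, new ProxyConsumeHandler(client.getSubscriber()));
        handlers.put(OperationType.START_DELIVERY, new ProxyStartDeliveryHandler(client.getSubscriber(), tracker));
        handlers.put(OperationType.STOP_DELIVERY, new ProxyStopDeliveryHandler(client.getSubscriber(), tracker));
        handlers.put(OperationType.CLOSESUBSCRIPTION, new ProxyCloseSubscriptionHandler(client.getSubscriber(), tracker));
        return handlers;
    }
    // An incoming request would then be routed roughly as:
    //   handlers.get(request.getType()).handleRequest(request, channel);
}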
bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/proxy/ProxySubscribeHandler.java
/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.proxy; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.jboss.netty.channel.Channel; import org.apache.hedwig.client.api.Subscriber; import org.apache.hedwig.client.data.TopicSubscriber; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.exceptions.PubSubException.TopicBusyException; import org.apache.hedwig.protocol.PubSubProtocol.PubSubRequest; import org.apache.hedwig.protocol.PubSubProtocol.SubscribeRequest; import org.apache.hedwig.protoextensions.PubSubResponseUtils; import org.apache.hedwig.server.handlers.ChannelDisconnectListener; import org.apache.hedwig.server.handlers.Handler; import org.apache.hedwig.server.netty.UmbrellaHandler; import org.apache.hedwig.util.Callback; public class ProxySubscribeHandler implements Handler, ChannelDisconnectListener { static final Logger logger = LoggerFactory.getLogger(ProxySubscribeHandler.class); Subscriber subscriber; ChannelTracker tracker; public ProxySubscribeHandler(Subscriber subscriber, ChannelTracker tracker) { this.subscriber = subscriber; this.tracker = tracker; } @Override public void channelDisconnected(Channel channel) { tracker.channelDisconnected(channel); } @Override public void handleRequest(final PubSubRequest request, final Channel channel) { if (!request.hasSubscribeRequest()) { UmbrellaHandler.sendErrorResponseToMalformedRequest(channel, request.getTxnId(), "Missing subscribe request data"); return; } SubscribeRequest subRequest = request.getSubscribeRequest(); final TopicSubscriber topicSubscriber = new TopicSubscriber(request.getTopic(), subRequest.getSubscriberId()); subscriber.asyncSubscribe(topicSubscriber.getTopic(), subRequest.getSubscriberId(), subRequest.getCreateOrAttach(), new Callback<Void>() { @Override public void operationFailed(Object ctx, PubSubException exception) { channel.write(PubSubResponseUtils.getResponseForException(exception, request.getTxnId())); } @Override public void operationFinished(Object ctx, Void resultOfOperation) { try { tracker.subscribeSucceeded(topicSubscriber, channel); } catch (TopicBusyException e) { channel.write(PubSubResponseUtils.getResponseForException(e, request.getTxnId())); return; } channel.write(PubSubResponseUtils.getSuccessResponse(request.getTxnId())); } }, null); } }
bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/proxy/ProxyUnsubscribeHandler.java
/** * Licensed to the Apache Software Foundation (ASF) under one * or more
contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.proxy; import org.jboss.netty.channel.Channel; import com.google.protobuf.ByteString; import org.apache.hedwig.client.api.Subscriber; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.protocol.PubSubProtocol.PubSubRequest; import org.apache.hedwig.protoextensions.PubSubResponseUtils; import org.apache.hedwig.server.handlers.Handler; import org.apache.hedwig.server.netty.UmbrellaHandler; import org.apache.hedwig.util.Callback; public class ProxyUnsubscribeHandler implements Handler { Subscriber subscriber; ChannelTracker tracker; public ProxyUnsubscribeHandler(Subscriber subscriber, ChannelTracker tracker) { this.subscriber = subscriber; this.tracker = tracker; } @Override public void handleRequest(final PubSubRequest request, final Channel channel) { if (!request.hasUnsubscribeRequest()) { UmbrellaHandler.sendErrorResponseToMalformedRequest(channel, request.getTxnId(), "Missing unsubscribe request data"); return; } ByteString topic = request.getTopic(); ByteString subscriberId = request.getUnsubscribeRequest().getSubscriberId(); synchronized (tracker) { // Even if unsubscribe fails, the hedwig client closes the channel // on which the subscription is being served. Hence better to tell // the tracker beforehand that this subscription is no longer served tracker.aboutToUnsubscribe(topic, subscriberId); subscriber.asyncUnsubscribe(topic, subscriberId, new Callback<Void>() { @Override public void operationFailed(Object ctx, PubSubException exception) { channel.write(PubSubResponseUtils.getResponseForException(exception, request.getTxnId())); } @Override public void operationFinished(Object ctx, Void resultOfOperation) { channel.write(PubSubResponseUtils.getSuccessResponse(request.getTxnId())); } }, null); } } }
bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/regions/HedwigHubClient.java
/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.regions; import org.jboss.netty.channel.socket.ClientSocketChannelFactory; import org.apache.hedwig.client.conf.ClientConfiguration; import org.apache.hedwig.client.netty.HedwigClientImpl; /** * This is a hub specific implementation of the HedwigClient. All it does, * though, is override the HedwigSubscriber with the hub specific child class. * This class exists so we can call the protected method in the parent to set * the subscriber, since we don't want to expose that API to the public. */ public class HedwigHubClient extends HedwigClientImpl { // Constructor when we already have a ChannelFactory instantiated. public HedwigHubClient(ClientConfiguration cfg, ClientSocketChannelFactory channelFactory) { super(cfg, channelFactory); // Override the type of HedwigSubscriber with the hub specific one. setSubscriber(new HedwigHubSubscriber(this)); } // Constructor when we don't have a ChannelFactory. The super constructor // will create one for us. public HedwigHubClient(ClientConfiguration cfg) { super(cfg); // Override the type of HedwigSubscriber with the hub specific one. setSubscriber(new HedwigHubSubscriber(this)); } }
bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/regions/HedwigHubClientFactory.java
/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.regions; import org.apache.commons.configuration.ConfigurationException; import org.jboss.netty.channel.socket.ClientSocketChannelFactory; import org.apache.hedwig.client.conf.ClientConfiguration; import org.apache.hedwig.server.common.ServerConfiguration; import org.apache.hedwig.util.HedwigSocketAddress; import org.slf4j.Logger; import org.slf4j.LoggerFactory; public class HedwigHubClientFactory { private final ServerConfiguration cfg; private final ClientConfiguration clientConfiguration; private final ClientSocketChannelFactory channelFactory; private static final Logger logger = LoggerFactory.getLogger(HedwigHubClientFactory.class); // Constructor that takes in a ServerConfiguration, ClientConfiguration and a ChannelFactory // so we can reuse it for all Clients created here. public HedwigHubClientFactory(ServerConfiguration cfg, ClientConfiguration clientConfiguration, ClientSocketChannelFactory channelFactory) { this.cfg = cfg; this.clientConfiguration = clientConfiguration; this.channelFactory = channelFactory; } /** * Manufacture a hub client whose default server to connect to is the input * HedwigSocketAddress hub. * * @param hub * The hub in another region to connect to.
*/ HedwigHubClient create(final HedwigSocketAddress hub) { // Create a hub specific version of the client to use ClientConfiguration hubClientConfiguration = new ClientConfiguration() { @Override protected HedwigSocketAddress getDefaultServerHedwigSocketAddress() { return hub; } @Override public boolean isSSLEnabled() { return cfg.isInterRegionSSLEnabled() || clientConfiguration.isSSLEnabled(); } }; try { hubClientConfiguration.addConf(this.clientConfiguration.getConf()); } catch (ConfigurationException e) { String msg = "Configuration exception while loading the client configuration for the region manager."; logger.error(msg, e); throw new RuntimeException(msg, e); } return new HedwigHubClient(hubClientConfiguration, channelFactory); } }
bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/regions/HedwigHubSubscriber.java
/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.regions; import com.google.protobuf.ByteString; import org.apache.hedwig.client.exceptions.InvalidSubscriberIdException; import org.apache.hedwig.client.netty.HedwigClientImpl; import org.apache.hedwig.client.netty.HedwigSubscriber; import org.apache.hedwig.exceptions.PubSubException.ClientAlreadySubscribedException; import org.apache.hedwig.exceptions.PubSubException.ClientNotSubscribedException; import org.apache.hedwig.exceptions.PubSubException.CouldNotConnectException; import org.apache.hedwig.exceptions.PubSubException.ServiceDownException; import org.apache.hedwig.protocol.PubSubProtocol.SubscribeRequest.CreateOrAttach; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionOptions; import org.apache.hedwig.util.Callback; /** * This is a hub specific child class of the HedwigSubscriber. The main thing it * does is wrap the public subscribe/unsubscribe methods by calling the * overloaded protected ones passing in a true value for the input boolean * parameter isHub. That will just make sure we validate the subscriberId * passed, ensuring it is of the right format either for a local or hub * subscriber.
*/ public class HedwigHubSubscriber extends HedwigSubscriber { public HedwigHubSubscriber(HedwigClientImpl client) { super(client); } @Override public void subscribe(ByteString topic, ByteString subscriberId, CreateOrAttach mode) throws CouldNotConnectException, ClientAlreadySubscribedException, ServiceDownException, InvalidSubscriberIdException { SubscriptionOptions options = SubscriptionOptions.newBuilder().setCreateOrAttach(mode).build(); subscribe(topic, subscriberId, options); } @Override public void asyncSubscribe(ByteString topic, ByteString subscriberId, CreateOrAttach mode, Callback<Void> callback, Object context) { SubscriptionOptions options = SubscriptionOptions.newBuilder().setCreateOrAttach(mode).build(); asyncSubscribe(topic, subscriberId, options, callback, context); } @Override public void subscribe(ByteString topic, ByteString subscriberId, SubscriptionOptions options) throws CouldNotConnectException, ClientAlreadySubscribedException, ServiceDownException, InvalidSubscriberIdException { subscribe(topic, subscriberId, options, true); } @Override public void asyncSubscribe(ByteString topic, ByteString subscriberId, SubscriptionOptions options, Callback<Void> callback, Object context) { asyncSubscribe(topic, subscriberId, options, callback, context, true); } @Override public void unsubscribe(ByteString topic, ByteString subscriberId) throws CouldNotConnectException, ClientNotSubscribedException, ServiceDownException, InvalidSubscriberIdException { unsubscribe(topic, subscriberId, true); } @Override public void asyncUnsubscribe(final ByteString topic, final ByteString subscriberId, final Callback<Void> callback, final Object context) { asyncUnsubscribe(topic, subscriberId, callback, context, true); } }
bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/regions/RegionManager.java
/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License.
*/ package org.apache.hedwig.server.regions; import java.util.ArrayList; import java.util.HashMap; import java.util.HashSet; import java.util.Set; import java.util.Timer; import java.util.TimerTask; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ConcurrentMap; import java.util.concurrent.ScheduledExecutorService; import java.util.concurrent.CountDownLatch; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.apache.zookeeper.ZooKeeper; import com.google.protobuf.ByteString; import org.apache.hedwig.client.api.MessageHandler; import org.apache.hedwig.client.exceptions.AlreadyStartDeliveryException; import org.apache.hedwig.client.netty.HedwigSubscriber; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.protocol.PubSubProtocol.Message; import org.apache.hedwig.protocol.PubSubProtocol.MessageSeqId; import org.apache.hedwig.protocol.PubSubProtocol.RegionSpecificSeqId; import org.apache.hedwig.protocol.PubSubProtocol.SubscribeRequest.CreateOrAttach; import org.apache.hedwig.protoextensions.SubscriptionStateUtils; import org.apache.hedwig.server.common.ServerConfiguration; import org.apache.hedwig.server.common.TopicOpQueuer; import org.apache.hedwig.server.persistence.PersistRequest; import org.apache.hedwig.server.persistence.PersistenceManager; import org.apache.hedwig.server.subscriptions.SubscriptionEventListener; import org.apache.hedwig.util.Callback; import org.apache.hedwig.util.CallbackUtils; import org.apache.hedwig.util.HedwigSocketAddress; public class RegionManager implements SubscriptionEventListener { protected static final Logger LOGGER = LoggerFactory.getLogger(RegionManager.class); private final ByteString mySubId; private final PersistenceManager pm; private final ArrayList clients = new ArrayList(); private final TopicOpQueuer queue; private final String myRegion; // Timer for running a retry thread task to retry remote-subscription in asynchronous mode. 
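// How the retry path below fits together: when an asynchronous cross-region subscribe
// fails, doRemoteSubscribe() records the failed (hub client, topic) pair in retryMap via
// putTopicInRetryMap(); RetrySubscribeTask, scheduled on this timer, then periodically
// drains retryMap and re-issues each subscription through the per-topic op queue,
// skipping topics that have since been removed from topicStatuses.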
private final Timer timer = new Timer(true); private final HashMap> retryMap = new HashMap>(); // map used to track whether a topic is remote subscribed or not private final ConcurrentMap topicStatuses = new ConcurrentHashMap(); /** * This is the Timer Task for retrying subscribing to remote regions */ class RetrySubscribeTask extends TimerTask { @Override public void run() { Set hubClients = new HashSet(); synchronized (retryMap) { hubClients.addAll(retryMap.keySet()); } if (hubClients.isEmpty()) { if (LOGGER.isDebugEnabled()) { LOGGER.debug("[" + myRegion + "] There is no hub client needs to retry subscriptions."); } return; } for (HedwigHubClient client : hubClients) { Set topics = null; synchronized (retryMap) { topics = retryMap.remove(client); } if (null == topics || topics.isEmpty()) { continue; } final CountDownLatch done = new CountDownLatch(1); Callback postCb = new Callback() { @Override public void operationFinished(Object ctx, Void resultOfOperation) { finish(); } @Override public void operationFailed(Object ctx, PubSubException exception) { finish(); } void finish() { done.countDown(); } }; Callback mcb = CallbackUtils.multiCallback(topics.size(), postCb, null); for (ByteString topic : topics) { Boolean doRemoteSubscribe = topicStatuses.get(topic); // topic has been removed, no retry again if (null == doRemoteSubscribe) { mcb.operationFinished(null, null); continue; } retrySubscribe(client, topic, mcb); } try { done.await(); } catch (InterruptedException e) { LOGGER.warn("Exception during retrying remote subscriptions : ", e); } } } } public RegionManager(final PersistenceManager pm, final ServerConfiguration cfg, final ZooKeeper zk, ScheduledExecutorService scheduler, HedwigHubClientFactory hubClientFactory) { this.pm = pm; mySubId = ByteString.copyFromUtf8(SubscriptionStateUtils.HUB_SUBSCRIBER_PREFIX + cfg.getMyRegion()); queue = new TopicOpQueuer(scheduler); for (final String hub : cfg.getRegions()) { clients.add(hubClientFactory.create(new HedwigSocketAddress(hub))); } myRegion = cfg.getMyRegionByteString().toStringUtf8(); if (cfg.getRetryRemoteSubscribeThreadRunInterval() > 0) { timer.schedule(new RetrySubscribeTask(), 0, cfg.getRetryRemoteSubscribeThreadRunInterval()); } } private void putTopicInRetryMap(HedwigHubClient client, ByteString topic) { if (LOGGER.isDebugEnabled()) { LOGGER.debug("[" + myRegion + "] Put topic in retry map : " + topic.toStringUtf8()); } synchronized (retryMap) { Set topics = retryMap.get(client); if (null == topics) { topics = new HashSet(); retryMap.put(client, topics); } topics.add(topic); } } /** * Do remote subscribe for a specified topic. * * @param client * Hedwig Hub Client to subscribe remote topic. * @param topic * Topic to subscribe. * @param synchronous * Whether to wait for the callback of subscription. * @param mcb * Callback to trigger after subscription is done. 
* @param context * Callback context */ private void doRemoteSubscribe(final HedwigHubClient client, final ByteString topic, final boolean synchronous, final Callback<Void> mcb, final Object context) { final HedwigSubscriber sub = client.getSubscriber(); try { if (sub.hasSubscription(topic, mySubId)) { if (LOGGER.isDebugEnabled()) { LOGGER.debug("[" + myRegion + "] cross-region subscription for topic " + topic.toStringUtf8() + " has existed before."); } mcb.operationFinished(null, null); return; } } catch (PubSubException e) { LOGGER.error("[" + myRegion + "] checking cross-region subscription for topic " + topic.toStringUtf8() + " failed (this should not happen): ", e); mcb.operationFailed(context, e); return; } sub.asyncSubscribe(topic, mySubId, CreateOrAttach.CREATE_OR_ATTACH, new Callback<Void>() { @Override public void operationFinished(Object ctx, Void resultOfOperation) { if (LOGGER.isDebugEnabled()) LOGGER.debug("[" + myRegion + "] cross-region subscription done for topic " + topic.toStringUtf8()); try { sub.startDelivery(topic, mySubId, new MessageHandler() { @Override public void deliver(final ByteString topic, ByteString subscriberId, Message msg, final Callback<Void> callback, final Object context) { // When messages are first published // locally, the PublishHandler sets the // source region in the Message. // Use the rebuilt message so the remote sequence component is actually recorded. if (msg.hasSrcRegion()) { msg = Message.newBuilder(msg).setMsgId( MessageSeqId.newBuilder(msg.getMsgId()).addRemoteComponents( RegionSpecificSeqId.newBuilder().setRegion( msg.getSrcRegion()).setSeqId( msg.getMsgId().getLocalComponent()))).build(); } pm.persistMessage(new PersistRequest(topic, msg, new Callback<MessageSeqId>() { @Override public void operationFinished(Object ctx, MessageSeqId resultOfOperation) { if (LOGGER.isDebugEnabled()) LOGGER.debug("[" + myRegion + "] cross-region recv-fwd succeeded for topic " + topic.toStringUtf8()); callback.operationFinished(context, null); } @Override public void operationFailed(Object ctx, PubSubException exception) { if (LOGGER.isDebugEnabled()) LOGGER.error("[" + myRegion + "] cross-region recv-fwd failed for topic " + topic.toStringUtf8(), exception); callback.operationFailed(context, exception); } }, null)); } }); if (LOGGER.isDebugEnabled()) LOGGER.debug("[" + myRegion + "] cross-region start-delivery succeeded for topic " + topic.toStringUtf8()); mcb.operationFinished(ctx, null); } catch (PubSubException ex) { if (LOGGER.isDebugEnabled()) LOGGER.error( "[" + myRegion + "] cross-region start-delivery failed for topic " + topic.toStringUtf8(), ex); mcb.operationFailed(ctx, ex); } catch (AlreadyStartDeliveryException ex) { LOGGER.error("[" + myRegion + "] cross-region start-delivery failed for topic " + topic.toStringUtf8(), ex); mcb.operationFailed(ctx, new PubSubException.UnexpectedConditionException("cross-region start-delivery failed : " + ex.getMessage())); } } @Override public void operationFailed(Object ctx, PubSubException exception) { if (LOGGER.isDebugEnabled()) LOGGER.error("[" + myRegion + "] cross-region subscribe failed for topic " + topic.toStringUtf8(), exception); if (!synchronous) { putTopicInRetryMap(client, topic); } mcb.operationFailed(ctx, exception); } }, null); } private void retrySubscribe(final HedwigHubClient client, final ByteString topic, final Callback<Void> cb) { if (LOGGER.isDebugEnabled()) { LOGGER.debug("[" + myRegion + "] Retry remote subscribe topic : " + topic.toStringUtf8()); } queue.pushAndMaybeRun(topic, queue.new AsynchronousOp<Void>(topic, cb, null) { @Override public void run() { Boolean doRemoteSubscribe = topicStatuses.get(topic); //
topic has been removed, no retry again if (null == doRemoteSubscribe) { cb.operationFinished(ctx, null); return; } doRemoteSubscribe(client, topic, false, cb, ctx); } }); } @Override public void onFirstLocalSubscribe(final ByteString topic, final boolean synchronous, final Callback cb) { topicStatuses.put(topic, true); // Whenever we acquire a topic due to a (local) subscribe, subscribe on // it to all the other regions (currently using simple all-to-all // topology). queue.pushAndMaybeRun(topic, queue.new AsynchronousOp(topic, cb, null) { @Override public void run() { Callback postCb = synchronous ? cb : CallbackUtils.logger(LOGGER, "[" + myRegion + "] all cross-region subscriptions succeeded", "[" + myRegion + "] at least one cross-region subscription failed"); final Callback mcb = CallbackUtils.multiCallback(clients.size(), postCb, ctx); for (final HedwigHubClient client : clients) { doRemoteSubscribe(client, topic, synchronous, mcb, ctx); } if (!synchronous) cb.operationFinished(null, null); } }); } @Override public void onLastLocalUnsubscribe(final ByteString topic) { topicStatuses.remove(topic); // TODO may want to ease up on the eager unsubscribe; this is dropping // cross-region subscriptions ASAP queue.pushAndMaybeRun(topic, queue.new AsynchronousOp(topic, new Callback() { @Override public void operationFinished(Object ctx, Void result) { if (LOGGER.isDebugEnabled()) LOGGER.debug("[" + myRegion + "] cross-region unsubscribes succeeded for topic " + topic.toStringUtf8()); } @Override public void operationFailed(Object ctx, PubSubException exception) { if (LOGGER.isDebugEnabled()) LOGGER.error("[" + myRegion + "] cross-region unsubscribes failed for topic " + topic.toStringUtf8(), exception); } }, null) { @Override public void run() { Callback mcb = CallbackUtils.multiCallback(clients.size(), cb, ctx); for (final HedwigHubClient client : clients) { final HedwigSubscriber sub = client.getSubscriber(); try { if (!sub.hasSubscription(topic, mySubId)) { if (LOGGER.isDebugEnabled()) { LOGGER.debug("[" + myRegion + "] cross-region subscription for topic " + topic.toStringUtf8() + " has existed before."); } mcb.operationFinished(null, null); continue; } } catch (PubSubException e) { LOGGER.error("[" + myRegion + "] checking cross-region subscription for topic " + topic.toStringUtf8() + " failed (this is should not happen): ", e); mcb.operationFailed(ctx, e); continue; } sub.asyncUnsubscribe(topic, mySubId, mcb, null); } } }); } // Method to shutdown and stop all of the cross-region Hedwig clients. public void stop() { timer.cancel(); for (HedwigHubClient client : clients) { client.close(); } } } bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/ssl/000077500000000000000000000000001244507361200300665ustar00rootroot00000000000000SslServerContextFactory.java000066400000000000000000000035351244507361200355050ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/ssl/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. 
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.ssl; import java.security.KeyStore; import javax.net.ssl.KeyManagerFactory; import javax.net.ssl.SSLContext; import org.apache.hedwig.client.ssl.SslContextFactory; import org.apache.hedwig.server.common.ServerConfiguration; public class SslServerContextFactory extends SslContextFactory { public SslServerContextFactory(ServerConfiguration cfg) { try { // Load our Java key store. KeyStore ks = KeyStore.getInstance("pkcs12"); ks.load(cfg.getCertStream(), cfg.getPassword().toCharArray()); // Like ssh-agent. KeyManagerFactory kmf = KeyManagerFactory.getInstance("SunX509"); kmf.init(ks, cfg.getPassword().toCharArray()); // Create the SSL context. ctx = SSLContext.getInstance("TLS"); ctx.init(kmf.getKeyManagers(), getTrustManagers(), null); } catch (Exception ex) { throw new RuntimeException(ex); } } @Override protected boolean isClient() { return false; } }
bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/subscriptions/AbstractSubscriptionManager.java
/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License.
*/ package org.apache.hedwig.server.subscriptions; import java.util.ArrayList; import java.util.Map; import java.util.Map.Entry; import java.util.Timer; import java.util.TimerTask; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.LinkedBlockingQueue; import java.util.concurrent.ScheduledExecutorService; import java.util.concurrent.atomic.AtomicInteger; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import com.google.protobuf.ByteString; import org.apache.bookkeeper.client.BKException; import org.apache.bookkeeper.versioning.Version; import org.apache.bookkeeper.versioning.Versioned; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.protocol.PubSubProtocol.MessageSeqId; import org.apache.hedwig.protocol.PubSubProtocol.SubscribeRequest; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionData; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionPreferences; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionState; import org.apache.hedwig.protocol.PubSubProtocol.SubscribeRequest.CreateOrAttach; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionEvent; import org.apache.hedwig.protoextensions.MessageIdUtils; import org.apache.hedwig.protoextensions.SubscriptionStateUtils; import org.apache.hedwig.server.common.ServerConfiguration; import org.apache.hedwig.server.common.TopicOpQueuer; import org.apache.hedwig.server.delivery.DeliveryManager; import org.apache.hedwig.server.persistence.PersistenceManager; import org.apache.hedwig.server.topics.TopicManager; import org.apache.hedwig.server.topics.TopicOwnershipChangeListener; import org.apache.hedwig.util.Callback; import org.apache.hedwig.util.CallbackUtils; import org.apache.hedwig.util.ConcurrencyUtils; public abstract class AbstractSubscriptionManager implements SubscriptionManager, TopicOwnershipChangeListener { static Logger logger = LoggerFactory.getLogger(AbstractSubscriptionManager.class); protected final ServerConfiguration cfg; protected final ConcurrentHashMap> top2sub2seq = new ConcurrentHashMap>(); protected final TopicOpQueuer queuer; private final ArrayList listeners = new ArrayList(); // Handle to the DeliveryManager for the server so we can stop serving subscribers // when losing topics private final DeliveryManager dm; // Handle to the PersistenceManager for the server so we can pass along the // message consume pointers for each topic. private final PersistenceManager pm; // Timer for running a recurring thread task to get the minimum message // sequence ID for each topic that all subscribers for it have consumed // already. With that information, we can call the PersistenceManager to // update it on the messages that are safe to be garbage collected. private final Timer timer = new Timer(true); // In memory mapping of topics to the minimum consumed message sequence ID // for all subscribers to the topic. 
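// MessagesConsumedTask (below) maintains this map: on each run it computes, per topic,
// the minimum last-persisted sequence id across that topic's subscribers; if the topic has
// no subscribers, the floor has advanced past the cached value, or nothing was cached yet
// (and the floor is non-zero), it caches the new floor and calls
// PersistenceManager.consumedUntil(topic, floor); otherwise, when every subscription
// carries a message bound, it calls consumeToBound(topic) instead.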
private final ConcurrentHashMap<ByteString, Long> topic2MinConsumedMessagesMap = new ConcurrentHashMap<ByteString, Long>(); protected final Callback<Void> noopCallback = new NoopCallback<Void>(); static class NoopCallback<T> implements Callback<T> { @Override public void operationFailed(Object ctx, PubSubException exception) { logger.warn("Exception found in AbstractSubscriptionManager : ", exception); } public void operationFinished(Object ctx, T resultOfOperation) { }; } public AbstractSubscriptionManager(ServerConfiguration cfg, TopicManager tm, PersistenceManager pm, DeliveryManager dm, ScheduledExecutorService scheduler) { this.cfg = cfg; queuer = new TopicOpQueuer(scheduler); tm.addTopicOwnershipChangeListener(this); this.pm = pm; this.dm = dm; // Schedule the recurring MessagesConsumedTask only if a // PersistenceManager is passed. if (pm != null) { timer.schedule(new MessagesConsumedTask(), 0, cfg.getMessagesConsumedThreadRunInterval()); } } /** * This is the Timer Task for finding out, for each topic, what the minimum * message consumed by its subscribers is. This information is used to pass * along to the server's PersistenceManager so it can garbage collect older * topic messages that are no longer needed by the subscribers. */ class MessagesConsumedTask extends TimerTask { /** * Implement the TimerTask's abstract run method. */ @Override public void run() { // We are looping through relatively small in memory data structures // so it should be safe to run this fairly often. for (ByteString topic : top2sub2seq.keySet()) { final Map<ByteString, InMemorySubscriptionState> topicSubscriptions = top2sub2seq.get(topic); if (topicSubscriptions == null) { continue; } long minConsumedMessage = Long.MAX_VALUE; boolean hasBound = true; // Loop through all subscribers on the current topic to find the // minimum persisted message id. The reason for not using the in-memory // consumed message id is that LedgerRanges and InMemorySubscriptionState // may be inconsistent in case of a server crash. for (InMemorySubscriptionState curSubscription : topicSubscriptions.values()) { if (curSubscription.getLastPersistedSeqId() < minConsumedMessage) { minConsumedMessage = curSubscription.getLastPersistedSeqId(); } hasBound = hasBound && curSubscription.getSubscriptionPreferences().hasMessageBound(); } boolean callPersistenceManager = true; // Call the PersistenceManager if nobody subscribes to the topic // yet, or the consume pointer has moved ahead since the last // time, or if this is the initial subscription. Long minConsumedFromMap = topic2MinConsumedMessagesMap.get(topic); if (topicSubscriptions.isEmpty() || (minConsumedFromMap != null && minConsumedFromMap < minConsumedMessage) || (minConsumedFromMap == null && minConsumedMessage != 0)) { topic2MinConsumedMessagesMap.put(topic, minConsumedMessage); pm.consumedUntil(topic, minConsumedMessage); } else if (hasBound) { pm.consumeToBound(topic); } } } } private class AcquireOp extends TopicOpQueuer.AsynchronousOp<Void> { public AcquireOp(ByteString topic, Callback<Void> callback, Object ctx) { queuer.super(topic, callback, ctx); } @Override public void run() { if (top2sub2seq.containsKey(topic)) { cb.operationFinished(ctx, null); return; } readSubscriptions(topic, new Callback<Map<ByteString, InMemorySubscriptionState>>() { @Override public void operationFailed(Object ctx, PubSubException exception) { cb.operationFailed(ctx, exception); } @Override public void operationFinished(final Object ctx, final Map<ByteString, InMemorySubscriptionState> resultOfOperation) { // We've just inherited a bunch of subscribers for this // topic, some of which may be local. If they are, then we // need to (1) notify listeners of this and (2) record the // number for bookkeeping so that future // subscribes/unsubscribes can efficiently notify listeners. // The final "commit" (and "abort") operations. final Callback<Void> cb2 = new Callback<Void>() { @Override public void operationFailed(Object ctx, PubSubException exception) { logger.error("Subscription manager failed to acquire topic " + topic.toStringUtf8(), exception); cb.operationFailed(ctx, null); } @Override public void operationFinished(Object ctx, Void voidObj) { top2sub2seq.put(topic, resultOfOperation); logger.info("Subscription manager successfully acquired topic: " + topic.toStringUtf8()); cb.operationFinished(ctx, null); } }; // Notify listeners if necessary. if (hasLocalSubscriptions(resultOfOperation)) { notifyFirstLocalSubscribe(topic, false, cb2, ctx); } else { cb2.operationFinished(ctx, null); } updateMessageBound(topic); } }, ctx); } } private void notifyFirstLocalSubscribe(ByteString topic, boolean synchronous, final Callback<Void> cb, final Object ctx) { Callback<Void> mcb = CallbackUtils.multiCallback(listeners.size(), cb, ctx); for (SubscriptionEventListener listener : listeners) { listener.onFirstLocalSubscribe(topic, synchronous, mcb); } } /** * Figure out who is subscribed. Do nothing if already acquired. If there's * an error reading the subscribers' sequence IDs, then the topic is not * acquired. * * @param topic * @param callback * @param ctx */ @Override public void acquiredTopic(final ByteString topic, final Callback<Void> callback, Object ctx) { queuer.pushAndMaybeRun(topic, new AcquireOp(topic, callback, ctx)); } class ReleaseOp extends TopicOpQueuer.AsynchronousOp<Void> { public ReleaseOp(final ByteString topic, final Callback<Void> cb, Object ctx) { queuer.super(topic, cb, ctx); } @Override public void run() { Callback<Void> finalCb = new Callback<Void>() { @Override public void operationFinished(Object ctx, Void resultOfOperation) { logger.info("Finished updating subscription states when losing topic " + topic.toStringUtf8()); finish(); } @Override public void operationFailed(Object ctx, PubSubException exception) { logger.warn("Error when releasing topic : " + topic.toStringUtf8(), exception); finish(); } private void finish() { // tell delivery manager to stop delivery for subscriptions of this topic final Map<ByteString, InMemorySubscriptionState> topicSubscriptions = top2sub2seq.remove(topic); // no subscriptions now, it may be removed by other release ops if (null != topicSubscriptions) { for (ByteString subId : topicSubscriptions.keySet()) { if (logger.isDebugEnabled()) { logger.debug("Stop serving subscriber (" + topic.toStringUtf8() + ", " + subId.toStringUtf8() + ") when losing topic"); } if (null != dm) { dm.stopServingSubscriber(topic, subId, SubscriptionEvent.TOPIC_MOVED, noopCallback, null); } } } if (logger.isDebugEnabled()) { logger.debug("Stop serving topic " + topic.toStringUtf8()); } // Since we decrement the local count when some remote subscriptions fail, // but we don't unsubscribe the ones that succeeded, we can't depend on the // local count here; just try to notify the unsubscribe.
notifyLastLocalUnsubscribe(topic); cb.operationFinished(ctx, null); } }; if (logger.isDebugEnabled()) { logger.debug("Try to update subscription states when losing topic " + topic.toStringUtf8()); } updateSubscriptionStates(topic, finalCb, ctx); } } void updateSubscriptionStates(ByteString topic, Callback finalCb, Object ctx) { // Try to update subscription states of a specified topic Map states = top2sub2seq.get(topic); if (null == states) { finalCb.operationFinished(ctx, null); } else { Callback mcb = CallbackUtils.multiCallback(states.size(), finalCb, ctx); for (Entry entry : states.entrySet()) { InMemorySubscriptionState memState = entry.getValue(); if (memState.setLastConsumeSeqIdImmediately()) { updateSubscriptionState(topic, entry.getKey(), memState, mcb, ctx); } else { mcb.operationFinished(ctx, null); } } } } /** * Remove the local mapping. */ @Override public void lostTopic(ByteString topic) { queuer.pushAndMaybeRun(topic, new ReleaseOp(topic, noopCallback, null)); } private void notifyLastLocalUnsubscribe(ByteString topic) { for (SubscriptionEventListener listener : listeners) listener.onLastLocalUnsubscribe(topic); } protected abstract void readSubscriptions(final ByteString topic, final Callback> cb, final Object ctx); protected abstract void readSubscriptionData(final ByteString topic, final ByteString subscriberId, final Callback cb, Object ctx); private class SubscribeOp extends TopicOpQueuer.AsynchronousOp { SubscribeRequest subRequest; MessageSeqId consumeSeqId; public SubscribeOp(ByteString topic, SubscribeRequest subRequest, MessageSeqId consumeSeqId, Callback callback, Object ctx) { queuer.super(topic, callback, ctx); this.subRequest = subRequest; this.consumeSeqId = consumeSeqId; } @Override public void run() { final Map topicSubscriptions = top2sub2seq.get(topic); if (topicSubscriptions == null) { cb.operationFailed(ctx, new PubSubException.ServerNotResponsibleForTopicException("")); return; } final ByteString subscriberId = subRequest.getSubscriberId(); final InMemorySubscriptionState subscriptionState = topicSubscriptions.get(subscriberId); CreateOrAttach createOrAttach = subRequest.getCreateOrAttach(); if (subscriptionState != null) { if (createOrAttach.equals(CreateOrAttach.CREATE)) { String msg = "Topic: " + topic.toStringUtf8() + " subscriberId: " + subscriberId.toStringUtf8() + " requested creating a subscription but it is already subscribed with state: " + SubscriptionStateUtils.toString(subscriptionState.getSubscriptionState()); logger.error(msg); cb.operationFailed(ctx, new PubSubException.ClientAlreadySubscribedException(msg)); return; } // Subscription existed before, check whether new preferences provided // if new preferences provided, merged the subscription data and updated them // TODO: needs ACL mechanism when changing preferences if (subRequest.hasPreferences() && subscriptionState.updatePreferences(subRequest.getPreferences())) { updateSubscriptionPreferences(topic, subscriberId, subscriptionState, new Callback() { @Override public void operationFailed(Object ctx, PubSubException exception) { cb.operationFailed(ctx, exception); } @Override public void operationFinished(Object ctx, Void resultOfOperation) { if (logger.isDebugEnabled()) { logger.debug("Topic: " + topic.toStringUtf8() + " subscriberId: " + subscriberId.toStringUtf8() + " attaching to subscription with state: " + SubscriptionStateUtils.toString(subscriptionState.getSubscriptionState()) + ", with preferences: " + 
SubscriptionStateUtils.toString(subscriptionState.getSubscriptionPreferences())); } // update message bound if necessary updateMessageBound(topic); cb.operationFinished(ctx, subscriptionState.toSubscriptionData()); } }, ctx); return; } // otherwise just attach if (logger.isDebugEnabled()) { logger.debug("Topic: " + topic.toStringUtf8() + " subscriberId: " + subscriberId.toStringUtf8() + " attaching to subscription with state: " + SubscriptionStateUtils.toString(subscriptionState.getSubscriptionState()) + ", with preferences: " + SubscriptionStateUtils.toString(subscriptionState.getSubscriptionPreferences())); } cb.operationFinished(ctx, subscriptionState.toSubscriptionData()); return; } // we don't have a mapping for this subscriber if (createOrAttach.equals(CreateOrAttach.ATTACH)) { String msg = "Topic: " + topic.toStringUtf8() + " subscriberId: " + subscriberId.toStringUtf8() + " requested attaching to an existing subscription but it is not subscribed"; logger.error(msg); cb.operationFailed(ctx, new PubSubException.ClientNotSubscribedException(msg)); return; } // now the hard case, this is a brand new subscription, must record SubscriptionState.Builder stateBuilder = SubscriptionState.newBuilder().setMsgId(consumeSeqId); SubscriptionPreferences.Builder preferencesBuilder; if (subRequest.hasPreferences()) { preferencesBuilder = SubscriptionPreferences.newBuilder(subRequest.getPreferences()); } else { preferencesBuilder = SubscriptionPreferences.newBuilder(); } // backward compability if (subRequest.hasMessageBound()) { preferencesBuilder = preferencesBuilder.setMessageBound(subRequest.getMessageBound()); } SubscriptionData.Builder subDataBuilder = SubscriptionData.newBuilder().setState(stateBuilder).setPreferences(preferencesBuilder); final SubscriptionData subData = subDataBuilder.build(); createSubscriptionData(topic, subscriberId, subData, new Callback() { @Override public void operationFailed(Object ctx, PubSubException exception) { cb.operationFailed(ctx, exception); } @Override public void operationFinished(Object ctx, final Version version) { Callback cb2 = new Callback() { @Override public void operationFailed(final Object ctx, final PubSubException exception) { logger.error("subscription for subscriber " + subscriberId.toStringUtf8() + " to topic " + topic.toStringUtf8() + " failed due to failed listener callback", exception); // should remove subscription when synchronized cross-region subscription failed deleteSubscriptionData(topic, subscriberId, version, new Callback() { @Override public void operationFinished(Object context, Void resultOfOperation) { finish(); } @Override public void operationFailed(Object context, PubSubException ex) { logger.error("Remove subscription for subscriber " + subscriberId.toStringUtf8() + " to topic " + topic.toStringUtf8() + " failed : ", ex); finish(); } private void finish() { cb.operationFailed(ctx, exception); } }, ctx); } @Override public void operationFinished(Object ctx, Void resultOfOperation) { topicSubscriptions.put(subscriberId, new InMemorySubscriptionState(subData, version)); updateMessageBound(topic); cb.operationFinished(ctx, subData); } }; // if this will be the first local subscription, notifyFirstLocalSubscribe if (!SubscriptionStateUtils.isHubSubscriber(subRequest.getSubscriberId()) && !hasLocalSubscriptions(topicSubscriptions)) notifyFirstLocalSubscribe(topic, subRequest.getSynchronous(), cb2, ctx); else cb2.operationFinished(ctx, null); } }, ctx); } } /** * @return True if the given subscriberId-to-subscriberState map 
contains a local subscription: * the vast majority of subscriptions are local, so we will quickly encounter one if it exists. */ private static boolean hasLocalSubscriptions(Map topicSubscriptions) { for (ByteString subId : topicSubscriptions.keySet()) if (!SubscriptionStateUtils.isHubSubscriber(subId)) return true; return false; } public void updateMessageBound(ByteString topic) { final Map topicSubscriptions = top2sub2seq.get(topic); if (topicSubscriptions == null) { return; } int maxBound = Integer.MIN_VALUE; for (Map.Entry e : topicSubscriptions.entrySet()) { if (!e.getValue().getSubscriptionPreferences().hasMessageBound()) { maxBound = Integer.MIN_VALUE; break; } else { maxBound = Math.max(maxBound, e.getValue().getSubscriptionPreferences().getMessageBound()); } } if (maxBound == Integer.MIN_VALUE) { pm.clearMessageBound(topic); } else { pm.setMessageBound(topic, maxBound); } } @Override public void serveSubscribeRequest(ByteString topic, SubscribeRequest subRequest, MessageSeqId consumeSeqId, Callback callback, Object ctx) { queuer.pushAndMaybeRun(topic, new SubscribeOp(topic, subRequest, consumeSeqId, callback, ctx)); } private class ConsumeOp extends TopicOpQueuer.AsynchronousOp { ByteString subscriberId; MessageSeqId consumeSeqId; public ConsumeOp(ByteString topic, ByteString subscriberId, MessageSeqId consumeSeqId, Callback callback, Object ctx) { queuer.super(topic, callback, ctx); this.subscriberId = subscriberId; this.consumeSeqId = consumeSeqId; } @Override public void run() { Map topicSubs = top2sub2seq.get(topic); if (topicSubs == null) { cb.operationFinished(ctx, null); return; } final InMemorySubscriptionState subState = topicSubs.get(subscriberId); if (subState == null) { cb.operationFinished(ctx, null); return; } if (subState.setLastConsumeSeqId(consumeSeqId, cfg.getConsumeInterval())) { updateSubscriptionState(topic, subscriberId, subState, new Callback() { @Override public void operationFinished(Object ctx, Void resultOfOperation) { subState.setLastPersistedSeqId(consumeSeqId.getLocalComponent()); cb.operationFinished(ctx, resultOfOperation); } @Override public void operationFailed(Object ctx, PubSubException exception) { cb.operationFailed(ctx, exception); } }, ctx); } else { if (logger.isDebugEnabled()) { logger.debug("Only advanced consume pointer in memory, will persist later, topic: " + topic.toStringUtf8() + " subscriberId: " + subscriberId.toStringUtf8() + " persistentState: " + SubscriptionStateUtils.toString(subState.getSubscriptionState()) + " in-memory consume-id: " + MessageIdUtils.msgIdToReadableString(subState.getLastConsumeSeqId())); } cb.operationFinished(ctx, null); } // tell delivery manage about the consume event if (null != dm) { dm.messageConsumed(topic, subscriberId, consumeSeqId); } } } @Override public void setConsumeSeqIdForSubscriber(ByteString topic, ByteString subscriberId, MessageSeqId consumeSeqId, Callback callback, Object ctx) { queuer.pushAndMaybeRun(topic, new ConsumeOp(topic, subscriberId, consumeSeqId, callback, ctx)); } private class CloseSubscriptionOp extends TopicOpQueuer.AsynchronousOp { public CloseSubscriptionOp(ByteString topic, ByteString subscriberId, Callback callback, Object ctx) { queuer.super(topic, callback, ctx); } @Override public void run() { // TODO: BOOKKEEPER-412: we might need to move the loaded subscription // to reclaim memory // But for now we do nothing cb.operationFinished(ctx, null); } } @Override public void closeSubscription(ByteString topic, ByteString subscriberId, Callback callback, Object ctx) { 
queuer.pushAndMaybeRun(topic, new CloseSubscriptionOp(topic, subscriberId, callback, ctx)); } private class UnsubscribeOp extends TopicOpQueuer.AsynchronousOp { ByteString subscriberId; public UnsubscribeOp(ByteString topic, ByteString subscriberId, Callback callback, Object ctx) { queuer.super(topic, callback, ctx); this.subscriberId = subscriberId; } @Override public void run() { final Map topicSubscriptions = top2sub2seq.get(topic); if (topicSubscriptions == null) { cb.operationFailed(ctx, new PubSubException.ServerNotResponsibleForTopicException("")); return; } if (!topicSubscriptions.containsKey(subscriberId)) { cb.operationFailed(ctx, new PubSubException.ClientNotSubscribedException("")); return; } deleteSubscriptionData(topic, subscriberId, topicSubscriptions.get(subscriberId).getVersion(), new Callback() { @Override public void operationFailed(Object ctx, PubSubException exception) { cb.operationFailed(ctx, exception); } @Override public void operationFinished(Object ctx, Void resultOfOperation) { topicSubscriptions.remove(subscriberId); // Notify listeners if necessary. if (!SubscriptionStateUtils.isHubSubscriber(subscriberId) && !hasLocalSubscriptions(topicSubscriptions)) notifyLastLocalUnsubscribe(topic); updateMessageBound(topic); cb.operationFinished(ctx, null); } }, ctx); } } @Override public void unsubscribe(ByteString topic, ByteString subscriberId, Callback callback, Object ctx) { queuer.pushAndMaybeRun(topic, new UnsubscribeOp(topic, subscriberId, callback, ctx)); } /** * Not thread-safe. */ @Override public void addListener(SubscriptionEventListener listener) { listeners.add(listener); } /** * Method to stop this class gracefully including releasing any resources * used and stopping all threads spawned. */ public void stop() { timer.cancel(); try { final LinkedBlockingQueue queue = new LinkedBlockingQueue(); // update dirty subscriptions for (ByteString topic : top2sub2seq.keySet()) { Callback finalCb = new Callback() { @Override public void operationFinished(Object ctx, Void resultOfOperation) { ConcurrencyUtils.put(queue, true); } @Override public void operationFailed(Object ctx, PubSubException exception) { ConcurrencyUtils.put(queue, false); } }; updateSubscriptionStates(topic, finalCb, null); queue.take(); } } catch (InterruptedException ie) { logger.warn("Error during updating subscription states : ", ie); } } private void updateSubscriptionState(final ByteString topic, final ByteString subscriberId, final InMemorySubscriptionState state, final Callback callback, Object ctx) { SubscriptionData subData; Callback cb = new Callback() { @Override public void operationFinished(Object ctx, Version version) { state.setVersion(version); callback.operationFinished(ctx, null); } @Override public void operationFailed(Object ctx, PubSubException exception) { if (exception instanceof PubSubException.BadVersionException) { readSubscriptionData(topic, subscriberId, new Callback() { @Override public void operationFinished(Object ctx, InMemorySubscriptionState resultOfOperation) { state.setVersion(resultOfOperation.getVersion()); updateSubscriptionState(topic, subscriberId, state, callback, ctx); } @Override public void operationFailed(Object ctx, PubSubException exception) { callback.operationFailed(ctx, exception); } }, ctx); return; } callback.operationFailed(ctx, exception); } }; if (isPartialUpdateSupported()) { subData = SubscriptionData.newBuilder().setState(state.getSubscriptionState()).build(); updateSubscriptionData(topic, subscriberId, subData, state.getVersion(), cb, 
ctx); } else { subData = state.toSubscriptionData(); replaceSubscriptionData(topic, subscriberId, subData, state.getVersion(), cb, ctx); } } private void updateSubscriptionPreferences(final ByteString topic, final ByteString subscriberId, final InMemorySubscriptionState state, final Callback callback, Object ctx) { SubscriptionData subData; Callback cb = new Callback() { @Override public void operationFinished(Object ctx, Version version) { state.setVersion(version); callback.operationFinished(ctx, null); } @Override public void operationFailed(Object ctx, PubSubException exception) { if (exception instanceof PubSubException.BadVersionException) { readSubscriptionData(topic, subscriberId, new Callback() { @Override public void operationFinished(Object ctx, InMemorySubscriptionState resultOfOperation) { state.setVersion(resultOfOperation.getVersion()); updateSubscriptionPreferences(topic, subscriberId, state, callback, ctx); } @Override public void operationFailed(Object ctx, PubSubException exception) { callback.operationFailed(ctx, exception); } }, ctx); return; } callback.operationFailed(ctx, exception); } }; if (isPartialUpdateSupported()) { subData = SubscriptionData.newBuilder().setPreferences(state.getSubscriptionPreferences()).build(); updateSubscriptionData(topic, subscriberId, subData, state.getVersion(), cb, ctx); } else { subData = state.toSubscriptionData(); replaceSubscriptionData(topic, subscriberId, subData, state.getVersion(), cb, ctx); } } protected abstract boolean isPartialUpdateSupported(); protected abstract void createSubscriptionData(final ByteString topic, ByteString subscriberId, SubscriptionData data, Callback callback, Object ctx); protected abstract void updateSubscriptionData(ByteString topic, ByteString subscriberId, SubscriptionData data, Version version, Callback callback, Object ctx); protected abstract void replaceSubscriptionData(ByteString topic, ByteString subscriberId, SubscriptionData data, Version version, Callback callback, Object ctx); protected abstract void deleteSubscriptionData(ByteString topic, ByteString subscriberId, Version version, Callback callback, Object ctx); } AllToAllTopologyFilter.java000066400000000000000000000056311244507361200373340ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/subscriptions/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
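A minimal sketch of the synchronous-wait idiom used by AbstractSubscriptionManager.stop() above: each asynchronous flush is bridged to a blocking call through a one-element queue. Only Callback, PubSubException, ConcurrencyUtils and updateSubscriptionStates come from the code above; the rest is illustrative.

    final LinkedBlockingQueue<Boolean> signal = new LinkedBlockingQueue<Boolean>();
    updateSubscriptionStates(topic, new Callback<Void>() {
        @Override
        public void operationFinished(Object ctx, Void result) {
            ConcurrencyUtils.put(signal, true);   // wake the waiting thread on success
        }
        @Override
        public void operationFailed(Object ctx, PubSubException e) {
            ConcurrencyUtils.put(signal, false);  // wake it on failure too
        }
    }, null);
    boolean flushed = signal.take();              // blocks until the callback fires
                                                  // (throws InterruptedException, caught in stop())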
*/ package org.apache.hedwig.server.subscriptions; import java.io.IOException; import com.google.protobuf.ByteString; import org.apache.commons.configuration.Configuration; import org.apache.commons.configuration.ConfigurationException; import org.apache.hedwig.filter.MessageFilterBase; import org.apache.hedwig.filter.ServerMessageFilter; import org.apache.hedwig.protocol.PubSubProtocol.Message; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionPreferences; import org.apache.hedwig.protoextensions.SubscriptionStateUtils; import org.apache.hedwig.server.common.ServerConfiguration; public class AllToAllTopologyFilter implements ServerMessageFilter { ByteString subscriberRegion; boolean isHubSubscriber; @Override public ServerMessageFilter initialize(Configuration conf) throws ConfigurationException, IOException { String region = conf.getString(ServerConfiguration.REGION, "standalone"); if (null == region) { throw new IOException("No region found to run " + getClass().getName()); } subscriberRegion = ByteString.copyFromUtf8(region); return this; } @Override public void uninitialize() { // do nothing now } @Override public MessageFilterBase setSubscriptionPreferences(ByteString topic, ByteString subscriberId, SubscriptionPreferences preferences) { isHubSubscriber = SubscriptionStateUtils.isHubSubscriber(subscriberId); return this; } @Override public boolean testMessage(Message message) { // We're using a simple all-to-all network topology, so no region // should ever need to forward messages to any other region. // Otherwise, with the current logic, messages will end up // ping-pong-ing back and forth between regions with subscriptions // to each other without termination (or in any other cyclic // configuration). if (isHubSubscriber && !message.getSrcRegion().equals(subscriberRegion)) { return false; } else { return true; } } } InMemorySubscriptionManager.java000066400000000000000000000125731244507361200404270ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/subscriptions/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
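How AllToAllTopologyFilter above behaves, sketched with illustrative values (conf, topic, hubSubscriberId and preferences are stand-ins):

    ServerMessageFilter filter = new AllToAllTopologyFilter().initialize(conf); // REGION, e.g. "us-west"
    filter.setSubscriptionPreferences(topic, hubSubscriberId, preferences);
    // For a hub subscriber, a message whose srcRegion is "us-east" fails the check and
    // testMessage(message) returns false: a hub only forwards messages that originated
    // in its own region, so nothing ping-pongs between regions. For an ordinary
    // (non-hub) subscriber, testMessage(message) always returns true.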
*/ package org.apache.hedwig.server.subscriptions; import java.util.Map; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ScheduledExecutorService; import com.google.protobuf.ByteString; import org.apache.bookkeeper.versioning.Version; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionData; import org.apache.hedwig.server.common.ServerConfiguration; import org.apache.hedwig.server.delivery.DeliveryManager; import org.apache.hedwig.server.persistence.PersistenceManager; import org.apache.hedwig.server.topics.TopicManager; import org.apache.hedwig.util.Callback; public class InMemorySubscriptionManager extends AbstractSubscriptionManager { // Backup for top2sub2seq final ConcurrentHashMap> top2sub2seqBackup = new ConcurrentHashMap>(); public InMemorySubscriptionManager(ServerConfiguration conf, TopicManager tm, PersistenceManager pm, DeliveryManager dm, ScheduledExecutorService scheduler) { super(conf, tm, pm, dm, scheduler); } @Override protected void createSubscriptionData(ByteString topic, ByteString subscriberId, SubscriptionData subData, Callback callback, Object ctx) { // nothing to do, in-memory info is already recorded by base class callback.operationFinished(ctx, null); } @Override protected void deleteSubscriptionData(ByteString topic, ByteString subscriberId, Version version, Callback callback, Object ctx) { // nothing to do, in-memory info is already deleted by base class callback.operationFinished(ctx, null); } @Override protected boolean isPartialUpdateSupported() { return false; } @Override protected void updateSubscriptionData(ByteString topic, ByteString subscriberId, SubscriptionData data, Version version, Callback callback, Object ctx) { throw new UnsupportedOperationException("Doesn't support partial update"); } @Override protected void replaceSubscriptionData(ByteString topic, ByteString subscriberId, SubscriptionData data, Version version, Callback callback, Object ctx) { // nothing to do, in-memory info is already updated by base class callback.operationFinished(ctx, null); } @Override public void lostTopic(ByteString topic) { // Backup topic-sub2seq map for readSubscriptions final Map sub2seq = top2sub2seq.get(topic); if (null != sub2seq) top2sub2seqBackup.put(topic, sub2seq); if (logger.isDebugEnabled()) { logger.debug("InMemorySubscriptionManager is losing topic " + topic.toStringUtf8()); } queuer.pushAndMaybeRun(topic, new ReleaseOp(topic, noopCallback, null)); } @Override protected void readSubscriptions(ByteString topic, Callback> cb, Object ctx) { // Since we backed up in-memory information on lostTopic, we can just return that back Map topicSubs = top2sub2seqBackup.remove(topic); if (topicSubs != null) { cb.operationFinished(ctx, topicSubs); } else { cb.operationFinished(ctx, new ConcurrentHashMap()); } } @Override protected void readSubscriptionData(ByteString topic, ByteString subscriberId, Callback cb, Object ctx) { // Since we backed up in-memory information on lostTopic, we can just return that back Map sub2seqBackup = top2sub2seqBackup.get(topic); if (sub2seqBackup == null) { cb.operationFinished(ctx, new InMemorySubscriptionState( SubscriptionData.getDefaultInstance(), Version.NEW)); return; } InMemorySubscriptionState subState = sub2seqBackup.remove(subscriberId); if (subState != null) { cb.operationFinished(ctx, subState); } else { cb.operationFinished(ctx, new InMemorySubscriptionState( SubscriptionData.getDefaultInstance(), Version.NEW)); } } } 
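The backup map above gives topic hand-off a simple contract. A sketch of the round trip as seen from subclass code (manager and the topic value are illustrative):

    ByteString topic = ByteString.copyFromUtf8("my-topic");
    manager.lostTopic(topic); // snapshots the topic's subscriber map into top2sub2seqBackup
    // Re-acquiring the topic reads the snapshot back; remove() consumes the backup entry.
    manager.readSubscriptions(topic, new Callback<Map<ByteString, InMemorySubscriptionState>>() {
        @Override
        public void operationFinished(Object ctx, Map<ByteString, InMemorySubscriptionState> subs) {
            // subs is exactly the map saved at lostTopic time, or an empty map if none was saved
        }
        @Override
        public void operationFailed(Object ctx, PubSubException e) {
            // never reached: this in-memory implementation always succeeds
        }
    }, null);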
InMemorySubscriptionState.java000066400000000000000000000175651244507361200401300ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/subscriptions/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.subscriptions; import java.util.HashMap; import java.util.Map; import com.google.protobuf.ByteString; import org.apache.bookkeeper.versioning.Version; import org.apache.hedwig.protocol.PubSubProtocol.MessageSeqId; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionData; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionPreferences; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionState; import org.apache.hedwig.protoextensions.MapUtils; import org.apache.hedwig.protoextensions.SubscriptionStateUtils; public class InMemorySubscriptionState { SubscriptionState subscriptionState; SubscriptionPreferences subscriptionPreferences; MessageSeqId lastConsumeSeqId; Version version; long lastPersistedSeqId; public InMemorySubscriptionState(SubscriptionData subscriptionData, Version version, MessageSeqId lastConsumeSeqId) { this.subscriptionState = subscriptionData.getState(); if (subscriptionData.hasPreferences()) { this.subscriptionPreferences = subscriptionData.getPreferences(); } else { // set initial subscription preferences SubscriptionPreferences.Builder prefsBuilder = SubscriptionPreferences.newBuilder(); // propagate the old system preferences from subscription state to preferences prefsBuilder.setMessageBound(subscriptionState.getMessageBound()); this.subscriptionPreferences = prefsBuilder.build(); } this.lastConsumeSeqId = lastConsumeSeqId; this.version = version; this.lastPersistedSeqId = subscriptionState.getMsgId().getLocalComponent(); } public InMemorySubscriptionState(SubscriptionData subscriptionData, Version version) { this(subscriptionData, version, subscriptionData.getState().getMsgId()); } public SubscriptionData toSubscriptionData() { SubscriptionState.Builder stateBuilder = SubscriptionState.newBuilder(subscriptionState).setMsgId(lastConsumeSeqId); return SubscriptionData.newBuilder().setState(stateBuilder) .setPreferences(subscriptionPreferences) .build(); } public SubscriptionState getSubscriptionState() { return subscriptionState; } public SubscriptionPreferences getSubscriptionPreferences() { return subscriptionPreferences; } public MessageSeqId getLastConsumeSeqId() { return lastConsumeSeqId; } public Version getVersion() { return version; } public void setVersion(Version version) { this.version = version; } /** * * @param lastConsumeSeqId * @param consumeInterval * The amount of laziness we want in persisting the consume * pointers * @return true if the resulting structure needs to be persisted, false * otherwise */ public boolean 
setLastConsumeSeqId(MessageSeqId lastConsumeSeqId, int consumeInterval) { long interval = lastConsumeSeqId.getLocalComponent() - subscriptionState.getMsgId().getLocalComponent(); if (interval <= 0) { return false; } // set consume seq id when it is larger this.lastConsumeSeqId = lastConsumeSeqId; if (interval < consumeInterval) { return false; } // subscription state will be updated, marked it as clean subscriptionState = SubscriptionState.newBuilder(subscriptionState).setMsgId(lastConsumeSeqId).build(); return true; } /** * Set lastConsumeSeqId Immediately * * @return true if the resulting structure needs to be persisted, false otherwise */ public boolean setLastConsumeSeqIdImmediately() { long interval = lastConsumeSeqId.getLocalComponent() - subscriptionState.getMsgId().getLocalComponent(); // no need to set if (interval <= 0) { return false; } subscriptionState = SubscriptionState.newBuilder(subscriptionState).setMsgId(lastConsumeSeqId).build(); return true; } public long getLastPersistedSeqId() { return lastPersistedSeqId; } public void setLastPersistedSeqId(long lastPersistedSeqId) { this.lastPersistedSeqId = lastPersistedSeqId; } /** * Update preferences. * * @return true if preferences is updated, which needs to be persisted, false otherwise. */ public boolean updatePreferences(SubscriptionPreferences preferences) { boolean changed = false; SubscriptionPreferences.Builder newPreferencesBuilder = SubscriptionPreferences.newBuilder(subscriptionPreferences); if (preferences.hasMessageBound()) { if (!subscriptionPreferences.hasMessageBound() || subscriptionPreferences.getMessageBound() != preferences.getMessageBound()) { newPreferencesBuilder.setMessageBound(preferences.getMessageBound()); changed = true; } } if (preferences.hasMessageFilter()) { if (!subscriptionPreferences.hasMessageFilter() || !subscriptionPreferences.getMessageFilter().equals(preferences.getMessageFilter())) { newPreferencesBuilder.setMessageFilter(preferences.getMessageFilter()); changed = true; } } if (preferences.hasMessageWindowSize()) { if (!subscriptionPreferences.hasMessageWindowSize() || subscriptionPreferences.getMessageWindowSize() != preferences.getMessageWindowSize()) { newPreferencesBuilder.setMessageWindowSize(preferences.getMessageWindowSize()); changed = true; } } if (preferences.hasOptions()) { Map userOptions = SubscriptionStateUtils.buildUserOptions(subscriptionPreferences); Map optUpdates = SubscriptionStateUtils.buildUserOptions(preferences); boolean optChanged = false; for (Map.Entry entry : optUpdates.entrySet()) { String key = entry.getKey(); if (userOptions.containsKey(key)) { if (null == entry.getValue()) { userOptions.remove(key); optChanged = true; } else { if (!entry.getValue().equals(userOptions.get(key))) { userOptions.put(key, entry.getValue()); optChanged = true; } } } else { userOptions.put(key, entry.getValue()); optChanged = true; } } if (optChanged) { changed = true; newPreferencesBuilder.setOptions(MapUtils.buildMapBuilder(userOptions)); } } if (changed) { subscriptionPreferences = newPreferencesBuilder.build(); } return changed; } } MMSubscriptionManager.java000066400000000000000000000141241244507361200371730ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/subscriptions/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. 
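A worked example of the laziness rule in setLastConsumeSeqId above, assuming a hypothetical helper seq(long) that builds a MessageSeqId from a local component, and a state whose persisted msgId sits at local sequence 1000 (all numbers illustrative):

    MessageSeqId seq(long local) { // hypothetical helper
        return MessageSeqId.newBuilder().setLocalComponent(local).build();
    }

    state.setLastConsumeSeqId(seq(1050), 100); // false: advanced in memory only (gap 50 < 100)
    state.setLastConsumeSeqId(seq(1100), 100); // true: gap reached 100, caller should persist now
    state.setLastConsumeSeqId(seq(1040), 100); // false: at or behind the persisted point, ignored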
The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.subscriptions; import java.io.IOException; import java.util.Map; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ScheduledExecutorService; import com.google.protobuf.ByteString; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionData; import org.apache.hedwig.server.common.ServerConfiguration; import org.apache.hedwig.server.delivery.DeliveryManager; import org.apache.hedwig.server.meta.MetadataManagerFactory; import org.apache.hedwig.server.meta.SubscriptionDataManager; import org.apache.hedwig.server.persistence.PersistenceManager; import org.apache.hedwig.server.topics.TopicManager; import org.apache.hedwig.util.Callback; import org.apache.bookkeeper.versioning.Version; import org.apache.bookkeeper.versioning.Versioned; /** * MetaManager-based subscription manager. */ public class MMSubscriptionManager extends AbstractSubscriptionManager { SubscriptionDataManager subManager; public MMSubscriptionManager(ServerConfiguration cfg, MetadataManagerFactory metaManagerFactory, TopicManager topicMgr, PersistenceManager pm, DeliveryManager dm, ScheduledExecutorService scheduler) { super(cfg, topicMgr, pm, dm, scheduler); this.subManager = metaManagerFactory.newSubscriptionDataManager(); } @Override protected void readSubscriptions(final ByteString topic, final Callback> cb, final Object ctx) { subManager.readSubscriptions(topic, new Callback>>() { @Override public void operationFailed(Object ctx, PubSubException pse) { cb.operationFailed(ctx, pse); } @Override public void operationFinished(Object ctx, Map> subs) { Map results = new ConcurrentHashMap(); for (Map.Entry> subEntry : subs.entrySet()) { Versioned vv = subEntry.getValue(); results.put(subEntry.getKey(), new InMemorySubscriptionState(vv.getValue(), vv.getVersion())); } cb.operationFinished(ctx, results); } }, ctx); } @Override protected void readSubscriptionData(final ByteString topic, final ByteString subscriberId, final Callback cb, final Object ctx) { subManager.readSubscriptionData(topic, subscriberId, new Callback>() { @Override public void operationFinished(Object ctx, Versioned subData) { if (null != subData) { cb.operationFinished(ctx, new InMemorySubscriptionState(subData.getValue(), subData.getVersion())); } else { cb.operationFinished(ctx, new InMemorySubscriptionState( SubscriptionData.getDefaultInstance(), Version.NEW)); } } @Override public void operationFailed(Object ctx, PubSubException exception) { cb.operationFailed(ctx, exception); } }, ctx); } @Override protected boolean isPartialUpdateSupported() { return subManager.isPartialUpdateSupported(); } @Override protected void createSubscriptionData(final ByteString topic, final ByteString subscriberId, final SubscriptionData subData, final Callback callback, final Object ctx) { subManager.createSubscriptionData(topic, subscriberId, subData, callback, ctx); } @Override protected void 
replaceSubscriptionData(final ByteString topic, final ByteString subscriberId, final SubscriptionData subData, final Version version, final Callback callback, final Object ctx) { subManager.replaceSubscriptionData(topic, subscriberId, subData, version, callback, ctx); } @Override protected void updateSubscriptionData(final ByteString topic, final ByteString subscriberId, final SubscriptionData subData, final Version version, final Callback callback, final Object ctx) { subManager.updateSubscriptionData(topic, subscriberId, subData, version, callback, ctx); } @Override protected void deleteSubscriptionData(final ByteString topic, final ByteString subscriberId, Version version, final Callback callback, final Object ctx) { subManager.deleteSubscriptionData(topic, subscriberId, version, callback, ctx); } @Override public void stop() { super.stop(); try { subManager.close(); } catch (IOException ioe) { logger.warn("Exception closing subscription data manager : ", ioe); } } } SubscriptionEventListener.java000066400000000000000000000042751244507361200401640ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/subscriptions/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.subscriptions; import com.google.protobuf.ByteString; import org.apache.hedwig.util.Callback; /** * For listening to events that are issued by a SubscriptionManager. * */ public interface SubscriptionEventListener { /** * Called by the subscription manager when it previously had zero local * subscribers for a topic and is currently accepting its first local * subscriber. * * @param topic * The topic of interest. * @param synchronous * Whether this request was actually initiated by a new local * subscriber, or whether it was an existing subscription * inherited by the hub (e.g. when recovering the state from ZK). * @param cb * The subscription will not complete until success is called on * this callback. An error on cb will result in a subscription * error. */ public void onFirstLocalSubscribe(ByteString topic, boolean synchronous, Callback cb); /** * Called by the SubscriptionManager when it previously had non-zero local * subscribers for a topic and is currently dropping its last local * subscriber. This is fully asynchronous so there is no callback. * * @param topic * The topic of interest. */ public void onLastLocalUnsubscribe(ByteString topic); } SubscriptionManager.java000066400000000000000000000103711244507361200367410ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/subscriptions/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. 
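A minimal implementation sketch of the SubscriptionEventListener contract above (the class name is illustrative). Completing the callback is what allows the triggering subscribe to finish:

    class RegionBridgeListener implements SubscriptionEventListener {
        @Override
        public void onFirstLocalSubscribe(ByteString topic, boolean synchronous, Callback<Void> cb) {
            // acquire any per-topic resources, then report success; calling
            // cb.operationFailed(...) here would fail the subscription itself
            cb.operationFinished(null, null);
        }
        @Override
        public void onLastLocalUnsubscribe(ByteString topic) {
            // release resources; fully asynchronous, so there is no callback to complete
        }
    }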
See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.subscriptions; import com.google.protobuf.ByteString; import org.apache.hedwig.protocol.PubSubProtocol.MessageSeqId; import org.apache.hedwig.protocol.PubSubProtocol.SubscribeRequest; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionData; import org.apache.hedwig.util.Callback; /** * All methods are thread-safe. */ public interface SubscriptionManager { /** * * Register a new subscription for the given subscriber for the given topic. * This method should reliably persist the existence of the subscription in * a way that it can't be lost. If the subscription already exists, * depending on the create or attach flag in the subscribe request, an * exception may be returned. * * This is an asynchronous method. * * @param topic * @param subRequest * @param consumeSeqId * The seqId to start serving the subscription from, if this is a * brand new subscription * @param callback * The subscription data returned by the callback. * @param ctx */ public void serveSubscribeRequest(ByteString topic, SubscribeRequest subRequest, MessageSeqId consumeSeqId, Callback callback, Object ctx); /** * Set the consume position of a given subscriber on a given topic. Note * that this method need not persist the consume position immediately but * can be lazy and persist it later asynchronously, if that is more * efficient. * * @param topic * @param subscriberId * @param consumeSeqId */ public void setConsumeSeqIdForSubscriber(ByteString topic, ByteString subscriberId, MessageSeqId consumeSeqId, Callback callback, Object ctx); /** * Close a particular subscription * * @param topic * Topic Name * @param subscriberId * Subscriber Id * @param callback * Callback * @param ctx * Callback context */ public void closeSubscription(ByteString topic, ByteString subscriberId, Callback callback, Object ctx); /** * Delete a particular subscription * * @param topic * @param subscriberId */ public void unsubscribe(ByteString topic, ByteString subscriberId, Callback callback, Object ctx); // Management API methods that we will fill in later // /** // * Get the ids of all subscribers for a given topic // * // * @param topic // * @return A list of subscriber ids that are currently subscribed to the // * given topic // */ // public List getSubscriptionsForTopic(ByteString topic); // // /** // * Get the topics to which a given subscriber is subscribed to // * // * @param subscriberId // * @return A list of the topics to which the given subscriber is // subscribed // * to // * @throws ServiceDownException // * If there is an error in looking up the subscription // * information // */ // public List getTopicsForSubscriber(ByteString subscriberId) // throws ServiceDownException; /** * Add a listener that is notified when topic-subscription pairs are added * or removed. 
*/ public void addListener(SubscriptionEventListener listener); /** * Stop Subscription Manager */ public void stop(); } bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/topics/000077500000000000000000000000001244507361200305665ustar00rootroot00000000000000AbstractTopicManager.java000066400000000000000000000176601244507361200354210ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/topics/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.topics; import java.net.UnknownHostException; import java.util.ArrayList; import java.util.Collections; import java.util.HashSet; import java.util.Set; import java.util.concurrent.ScheduledExecutorService; import java.util.concurrent.TimeUnit; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import com.google.protobuf.ByteString; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.server.common.ServerConfiguration; import org.apache.hedwig.server.common.TopicOpQueuer; import org.apache.hedwig.util.Callback; import org.apache.hedwig.util.CallbackUtils; import org.apache.hedwig.util.HedwigSocketAddress; public abstract class AbstractTopicManager implements TopicManager { /** * My name. */ protected HedwigSocketAddress addr; /** * Topic change listeners. */ protected ArrayList listeners = new ArrayList(); /** * List of topics I believe I am responsible for. 
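A typical call against the SubscriptionManager interface above (mgr, topic, subRequest and startId are illustrative stand-ins):

    mgr.serveSubscribeRequest(topic, subRequest, startId, new Callback<SubscriptionData>() {
        @Override
        public void operationFinished(Object ctx, SubscriptionData subData) {
            // the subscription is now reliably recorded; delivery can start from its state
        }
        @Override
        public void operationFailed(Object ctx, PubSubException e) {
            // e.g. PubSubException.ClientNotSubscribedException when an ATTACH request
            // finds no existing subscription (see AbstractSubscriptionManager above)
        }
    }, null);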
*/ protected Set topics = Collections.synchronizedSet(new HashSet()); protected TopicOpQueuer queuer; protected ServerConfiguration cfg; protected ScheduledExecutorService scheduler; private static final Logger logger = LoggerFactory.getLogger(AbstractTopicManager.class); private class GetOwnerOp extends TopicOpQueuer.AsynchronousOp { public boolean shouldClaim; public GetOwnerOp(final ByteString topic, boolean shouldClaim, final Callback cb, Object ctx) { queuer.super(topic, cb, ctx); this.shouldClaim = shouldClaim; } @Override public void run() { realGetOwner(topic, shouldClaim, cb, ctx); } } private class ReleaseOp extends TopicOpQueuer.AsynchronousOp { public ReleaseOp(ByteString topic, Callback cb, Object ctx) { queuer.super(topic, cb, ctx); } @Override public void run() { if (!topics.contains(topic)) { cb.operationFinished(ctx, null); return; } realReleaseTopic(topic, cb, ctx); } } public AbstractTopicManager(ServerConfiguration cfg, ScheduledExecutorService scheduler) throws UnknownHostException { this.cfg = cfg; this.queuer = new TopicOpQueuer(scheduler); this.scheduler = scheduler; addr = cfg.getServerAddr(); } @Override public synchronized void addTopicOwnershipChangeListener(TopicOwnershipChangeListener listener) { listeners.add(listener); } protected final synchronized void notifyListenersAndAddToOwnedTopics(final ByteString topic, final Callback originalCallback, final Object originalContext) { Callback postCb = new Callback() { @Override public void operationFinished(Object ctx, Void resultOfOperation) { topics.add(topic); if (cfg.getRetentionSecs() > 0) { scheduler.schedule(new Runnable() { @Override public void run() { // Enqueue a release operation. (Recall that release // doesn't "fail" even if the topic is missing.) releaseTopic(topic, new Callback() { @Override public void operationFailed(Object ctx, PubSubException exception) { logger.error("failure that should never happen when periodically releasing topic " + topic, exception); } @Override public void operationFinished(Object ctx, Void resultOfOperation) { if (logger.isDebugEnabled()) { logger.debug("successful periodic release of topic " + topic.toStringUtf8()); } } }, null); } }, cfg.getRetentionSecs(), TimeUnit.SECONDS); } originalCallback.operationFinished(originalContext, addr); } @Override public void operationFailed(final Object ctx, final PubSubException exception) { // TODO: optimization: we can release this as soon as we experience the first error. 
Callback cb = new Callback() { public void operationFinished(Object _ctx, Void _resultOfOperation) { originalCallback.operationFailed(ctx, exception); } public void operationFailed(Object _ctx, PubSubException _exception) { logger.error("Exception releasing topic", _exception); originalCallback.operationFailed(ctx, exception); } }; realReleaseTopic(topic, cb, originalContext); } }; Callback mcb = CallbackUtils.multiCallback(listeners.size(), postCb, null); for (TopicOwnershipChangeListener listener : listeners) { listener.acquiredTopic(topic, mcb, null); } } private void realReleaseTopic(ByteString topic, Callback callback, Object ctx) { for (TopicOwnershipChangeListener listener : listeners) listener.lostTopic(topic); topics.remove(topic); postReleaseCleanup(topic, callback, ctx); } @Override public final void getOwner(ByteString topic, boolean shouldClaim, Callback cb, Object ctx) { queuer.pushAndMaybeRun(topic, new GetOwnerOp(topic, shouldClaim, cb, ctx)); } @Override public final void releaseTopic(ByteString topic, Callback cb, Object ctx) { queuer.pushAndMaybeRun(topic, new ReleaseOp(topic, cb, ctx)); } /** * This method should "return" the owner of the topic if one has been chosen * already. If there is no pre-chosen owner, either this hub or some other * should be chosen based on the shouldClaim parameter. If its ends up * choosing this hub as the owner, the {@code * AbstractTopicManager#notifyListenersAndAddToOwnedTopics(ByteString, * OperationCallback, Object)} method must be called. * */ protected abstract void realGetOwner(ByteString topic, boolean shouldClaim, Callback cb, Object ctx); /** * The method should do any cleanup necessary to indicate to other hubs that * this topic has been released */ protected abstract void postReleaseCleanup(ByteString topic, Callback cb, Object ctx); @Override public void stop() { // do nothing now } } bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/topics/HubInfo.java000066400000000000000000000117221244507361200327660ustar00rootroot00000000000000package org.apache.hedwig.server.topics; /* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ import java.io.BufferedReader; import java.io.IOException; import java.io.StringReader; import org.apache.hedwig.protocol.PubSubProtocol.HubInfoData; import org.apache.hedwig.util.HedwigSocketAddress; import com.google.protobuf.InvalidProtocolBufferException; import com.google.protobuf.TextFormat; /** * Info identifies a hub server. 
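The listener fan-out in notifyListenersAndAddToOwnedTopics above hinges on CallbackUtils.multiCallback: the returned callback must be completed once per expected party before the wrapped callback fires. A condensed sketch, with done an illustrative Callback<Void>:

    Callback<Void> mcb = CallbackUtils.multiCallback(2, done, null);
    mcb.operationFinished(null, null); // first listener reports in: nothing fires yet
    mcb.operationFinished(null, null); // second completion triggers done
    // a failure among the completions is propagated to done instead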
*/ public class HubInfo { public static class InvalidHubInfoException extends Exception { public InvalidHubInfoException(String msg) { super(msg); } public InvalidHubInfoException(String msg, Throwable t) { super(msg, t); } } // address identify a hub server final HedwigSocketAddress addr; // its znode czxid final long czxid; // protobuf encoded hub info data to be serialized HubInfoData hubInfoData; public HubInfo(HedwigSocketAddress addr, long czxid) { this(addr, czxid, null); } protected HubInfo(HedwigSocketAddress addr, long czxid, HubInfoData data) { this.addr = addr; this.czxid = czxid; this.hubInfoData = data; } public HedwigSocketAddress getAddress() { return addr; } public long getZxid() { return czxid; } private synchronized HubInfoData getHubInfoData() { if (null == hubInfoData) { hubInfoData = HubInfoData.newBuilder().setHostname(addr.toString()) .setCzxid(czxid).build(); } return hubInfoData; } @Override public String toString() { return TextFormat.printToString(getHubInfoData()); } @Override public boolean equals(Object o) { if (null == o) { return false; } if (!(o instanceof HubInfo)) { return false; } HubInfo other = (HubInfo)o; if (null == addr) { if (null == other.addr) { return true; } else { return czxid == other.czxid; } } else { if (addr.equals(other.addr)) { return czxid == other.czxid; } else { return false; } } } @Override public int hashCode() { return addr.hashCode(); } /** * Parse hub info from a string. * * @param hubInfoStr * String representation of hub info * @return hub info * @throws InvalidHubInfoException when hubInfoStr is not a valid * string representation of hub info. */ public static HubInfo parse(String hubInfoStr) throws InvalidHubInfoException { // it is not protobuf encoded hub info, it might be generated by ZkTopicManager if (!hubInfoStr.startsWith("hostname")) { final HedwigSocketAddress owner; try { owner = new HedwigSocketAddress(hubInfoStr); } catch (Exception e) { throw new InvalidHubInfoException("Corrupted hub server address : " + hubInfoStr, e); } return new HubInfo(owner, 0L); } // it is a protobuf encoded hub info. HubInfoData hubInfoData; try { BufferedReader reader = new BufferedReader( new StringReader(hubInfoStr)); HubInfoData.Builder dataBuilder = HubInfoData.newBuilder(); TextFormat.merge(reader, dataBuilder); hubInfoData = dataBuilder.build(); } catch (InvalidProtocolBufferException ipbe) { throw new InvalidHubInfoException("Corrupted hub info : " + hubInfoStr, ipbe); } catch (IOException ie) { throw new InvalidHubInfoException("Corrupted hub info : " + hubInfoStr, ie); } final HedwigSocketAddress owner; try { owner = new HedwigSocketAddress(hubInfoData.getHostname().trim()); } catch (Exception e) { throw new InvalidHubInfoException("Corrupted hub server address : " + hubInfoData.getHostname(), e); } long ownerZxid = hubInfoData.getCzxid(); return new HubInfo(owner, ownerZxid, hubInfoData); } } bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/topics/HubLoad.java000066400000000000000000000101461244507361200327510ustar00rootroot00000000000000/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. 
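HubInfo.parse above accepts both on-wire formats. An illustrative pair (host and czxid values are made up; parse throws InvalidHubInfoException on corrupted input):

    // legacy format written by ZkTopicManager: a bare address string, czxid defaults to 0
    HubInfo legacy = HubInfo.parse("hub1.example.com:4080:9876");
    // protobuf text format, recognized by its leading "hostname" field
    HubInfo current = HubInfo.parse("hostname: \"hub1.example.com:4080:9876\"\nczxid: 12345");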
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.hedwig.server.topics; import java.io.BufferedReader; import java.io.IOException; import java.io.StringReader; import org.apache.hedwig.protocol.PubSubProtocol.HubLoadData; import com.google.protobuf.InvalidProtocolBufferException; import com.google.protobuf.TextFormat; /** * This class encapsulates metrics for determining the load on a hub server. */ public class HubLoad implements Comparable { public static final HubLoad MAX_LOAD = new HubLoad(Long.MAX_VALUE); public static final HubLoad MIN_LOAD = new HubLoad(0); public static class InvalidHubLoadException extends Exception { public InvalidHubLoadException(String msg) { super(msg); } public InvalidHubLoadException(String msg, Throwable t) { super(msg, t); } } // how many topics that a hub server serves long numTopics; public HubLoad(long num) { this.numTopics = num; } public HubLoad(HubLoadData data) { this.numTopics = data.getNumTopics(); } public HubLoad setNumTopics(long numTopics) { this.numTopics = numTopics; return this; } public HubLoadData toHubLoadData() { return HubLoadData.newBuilder().setNumTopics(numTopics).build(); } @Override public String toString() { return TextFormat.printToString(toHubLoadData()); } @Override public boolean equals(Object o) { if (null == o || !(o instanceof HubLoad)) { return false; } return 0 == compareTo((HubLoad)o); } @Override public int compareTo(HubLoad other) { return numTopics > other.numTopics ? 1 : (numTopics < other.numTopics ? -1 : 0); } @Override public int hashCode() { return (int)numTopics; } /** * Parse hub load from a string. * * @param hubLoadStr * String representation of hub load * @return hub load * @throws InvalidHubLoadException when hubLoadStr is not a valid * string representation of hub load. */ public static HubLoad parse(String hubLoadStr) throws InvalidHubLoadException { // it is not a protobuf encoded hub load; it might be generated by ZkTopicManager if (!hubLoadStr.startsWith("numTopics")) { try { long numTopics = Long.parseLong(hubLoadStr, 10); return new HubLoad(numTopics); } catch (NumberFormatException nfe) { throw new InvalidHubLoadException("Corrupted hub load data : " + hubLoadStr, nfe); } } // it is a protobuf encoded hub load data. HubLoadData hubLoadData; try { BufferedReader reader = new BufferedReader( new StringReader(hubLoadStr)); HubLoadData.Builder dataBuilder = HubLoadData.newBuilder(); TextFormat.merge(reader, dataBuilder); hubLoadData = dataBuilder.build(); } catch (InvalidProtocolBufferException ipbe) { throw new InvalidHubLoadException("Corrupted hub load data : " + hubLoadStr, ipbe); } catch (IOException ie) { throw new InvalidHubLoadException("Corrupted hub load data : " + hubLoadStr, ie); } return new HubLoad(hubLoadData); } } HubServerManager.java000066400000000000000000000065221244507361200345570ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/topics/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. 
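HubLoad above orders hubs purely by served-topic count, which is all that least-loaded hub selection needs. Illustrative usage (the counts are made up; parse declares InvalidHubLoadException):

    HubLoad a = HubLoad.parse("12");             // legacy bare number, e.g. from ZkTopicManager
    HubLoad b = HubLoad.parse("numTopics: 7");   // protobuf text format
    HubLoad least = a.compareTo(b) <= 0 ? a : b; // b wins: seven topics is the lighter load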
The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.topics; import java.io.IOException; import org.apache.hedwig.util.Callback; /** * The HubServerManager class manages info about hub servers. */ interface HubServerManager { static interface ManagerListener { /** * Server manager is suspended if encountering some transient errors. * {@link #onResume()} would be called if those errors could be fixed. * {@link #onShutdown()} would be called if those errors could not be fixed. */ public void onSuspend(); /** * Server manager is resumed after fixing some transient errors. */ public void onResume(); /** * Server manager had to shutdown due to unrecoverable errors. */ public void onShutdown(); } /** * Register a listener to listen to server manager events. * * @param listener * Server Manager Listener */ public void registerListener(ManagerListener listener); /** * Register itself with the cluster. * * @param selfLoad * Self load data * @param callback * Callback when itself registered. * @param ctx * Callback context. */ public void registerSelf(HubLoad selfLoad, Callback callback, Object ctx); /** * Unregister itself from the cluster. */ public void unregisterSelf() throws IOException; /** * Upload self server load data. * * It is an asynchronous call which should not block other operations. * Currently we don't need to care about whether it succeeds or not. * * @param selfLoad * Hub server load data. */ public void uploadSelfLoadData(HubLoad selfLoad); /** * Check whether the hub server identified by the given hub id is still alive. * * @param hub * Hub id identifying a lifecycle of a hub server * @param callback * Callback of check result. If the hub server is still * alive under the provided hub id, return true. * Otherwise return false. * @param ctx * Callback context */ public void isHubAlive(HubInfo hub, Callback callback, Object ctx); /** * Choose a least loaded hub server from available hub servers. * * @param callback * Callback to return least loaded hub server. * @param ctx * Callback context. */ public void chooseLeastLoadedHub(Callback callback, Object ctx); } MMTopicManager.java000066400000000000000000000375441244507361200341700ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/topics/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. 
See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.hedwig.server.topics; import java.net.UnknownHostException; import java.io.IOException; import java.util.concurrent.ScheduledExecutorService; import java.util.concurrent.SynchronousQueue; import org.apache.bookkeeper.versioning.Version; import org.apache.bookkeeper.versioning.Versioned; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.server.common.ServerConfiguration; import org.apache.hedwig.server.meta.MetadataManagerFactory; import org.apache.hedwig.server.meta.TopicOwnershipManager; import org.apache.hedwig.util.Callback; import org.apache.hedwig.util.ConcurrencyUtils; import org.apache.hedwig.util.Either; import org.apache.hedwig.util.HedwigSocketAddress; import org.apache.zookeeper.ZooKeeper; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import com.google.protobuf.ByteString; /** * TopicOwnershipManager based topic manager */ public class MMTopicManager extends AbstractTopicManager implements TopicManager { static Logger logger = LoggerFactory.getLogger(MMTopicManager.class); // topic ownership manager private final TopicOwnershipManager mm; // hub server manager private final HubServerManager hubManager; private final HubInfo myHubInfo; private final HubLoad myHubLoad; // Boolean flag indicating if we should suspend activity. If this is true, // all of the Ops put into the queuer will fail automatically. protected volatile boolean isSuspended = false; public MMTopicManager(ServerConfiguration cfg, ZooKeeper zk, MetadataManagerFactory mmFactory, ScheduledExecutorService scheduler) throws UnknownHostException, PubSubException { super(cfg, scheduler); // initialize topic ownership manager this.mm = mmFactory.newTopicOwnershipManager(); this.hubManager = new ZkHubServerManager(cfg, zk, addr); final SynchronousQueue> queue = new SynchronousQueue>(); myHubLoad = new HubLoad(topics.size()); this.hubManager.registerListener(new HubServerManager.ManagerListener() { @Override public void onSuspend() { isSuspended = true; } @Override public void onResume() { isSuspended = false; } @Override public void onShutdown() { // if hub server manager can't work, we had to quit Runtime.getRuntime().exit(1); } }); this.hubManager.registerSelf(myHubLoad, new Callback() { @Override public void operationFinished(final Object ctx, final HubInfo resultOfOperation) { logger.info("Successfully registered hub {} with zookeeper", resultOfOperation); ConcurrencyUtils.put(queue, Either.of(resultOfOperation, (PubSubException) null)); } @Override public void operationFailed(Object ctx, PubSubException exception) { logger.error("Failed to register hub with zookeeper", exception); ConcurrencyUtils.put(queue, Either.of((HubInfo)null, exception)); } }, null); Either result = ConcurrencyUtils.take(queue); PubSubException pse = result.right(); if (pse != null) { throw pse; } myHubInfo = result.left(); logger.info("Start metadata manager based topic manager with hub id : " + myHubInfo); } @Override protected void realGetOwner(final ByteString topic, final boolean shouldClaim, final Callback cb, final Object ctx) { // If operations are suspended due to a ZK client disconnect, just error // out this call and return. 
        if (isSuspended) {
            cb.operationFailed(ctx, new PubSubException.ServiceDownException(
                "MMTopicManager service is temporarily suspended!"));
            return;
        }

        if (topics.contains(topic)) {
            cb.operationFinished(ctx, addr);
            return;
        }

        new MMGetOwnerOp(topic, cb, ctx).read();
    }

    /**
     * MMGetOwnerOp performs topic owner election through the metadata manager,
     * using versioned writes as a compare-and-swap primitive.
     */
    class MMGetOwnerOp {
        ByteString topic;
        Callback<HedwigSocketAddress> cb;
        Object ctx;

        public MMGetOwnerOp(ByteString topic, Callback<HedwigSocketAddress> cb, Object ctx) {
            this.topic = topic;
            this.cb = cb;
            this.ctx = ctx;
        }

        protected void read() {
            mm.readOwnerInfo(topic, new Callback<Versioned<HubInfo>>() {
                @Override
                public void operationFinished(final Object ctx, final Versioned<HubInfo> owner) {
                    if (null == owner) {
                        logger.info("{} : No owner found for topic {}",
                                    new Object[] { addr, topic.toStringUtf8() });
                        // no data found, elect an owner starting from a fresh version
                        choose(Version.NEW);
                        return;
                    }
                    final Version ownerVersion = owner.getVersion();
                    if (null == owner.getValue()) {
                        logger.info("{} : Invalid owner found for topic {}",
                                    new Object[] { addr, topic.toStringUtf8() });
                        choose(ownerVersion);
                        return;
                    }
                    final HubInfo hub = owner.getValue();
                    logger.info("{} : Read owner of topic {} : {}",
                                new Object[] { addr, topic.toStringUtf8(), hub });
                    if (hub.getAddress().equals(addr)) {
                        if (myHubInfo.getZxid() == hub.getZxid()) {
                            claimTopic(ctx);
                            return;
                        } else {
                            // a stale record left by a previous incarnation of this hub
                            choose(ownerVersion);
                            return;
                        }
                    }
                    logger.info("{} : Checking whether owner {} of topic {} is still alive.",
                                new Object[] { addr, hub, topic.toStringUtf8() });
                    hubManager.isHubAlive(hub, new Callback<Boolean>() {
                        @Override
                        public void operationFinished(Object ctx, Boolean isAlive) {
                            if (isAlive) {
                                cb.operationFinished(ctx, hub.getAddress());
                            } else {
                                choose(ownerVersion);
                            }
                        }
                        @Override
                        public void operationFailed(Object ctx, PubSubException pse) {
                            cb.operationFailed(ctx, pse);
                        }
                    }, ctx);
                }

                @Override
                public void operationFailed(Object ctx, PubSubException exception) {
                    cb.operationFailed(ctx, new PubSubException.ServiceDownException(
                        "Could not read ownership for topic " + topic.toStringUtf8()
                        + " : " + exception.getMessage()));
                }
            }, ctx);
        }

        public void claim(final Version prevOwnerVersion) {
            logger.info("{} : claiming ownership of topic {} for {}",
                        new Object[] { addr, topic.toStringUtf8(), myHubInfo });
            mm.writeOwnerInfo(topic, myHubInfo, prevOwnerVersion, new Callback<Version>() {
                @Override
                public void operationFinished(Object ctx, Version newVersion) {
                    claimTopic(ctx);
                }
                @Override
                public void operationFailed(Object ctx, PubSubException exception) {
                    if (exception instanceof PubSubException.NoTopicOwnerInfoException ||
                        exception instanceof PubSubException.BadVersionException) {
                        // someone else updated the owner record; re-read and retry
                        logger.info("{} : Someone else claimed ownership of topic {}. Reading the owner again.",
                                    new Object[] { addr, topic.toStringUtf8() });
                        read();
                        return;
                    }
                    cb.operationFailed(ctx, new PubSubException.ServiceDownException(
                        "Exception when writing owner info to claim ownership of topic "
                        + topic.toStringUtf8() + " : " + exception.getMessage()));
                }
            }, ctx);
        }

        protected void claimTopic(Object ctx) {
            logger.info("{} : claimed ownership of topic {} for {}",
                        new Object[] { addr, topic.toStringUtf8(), myHubInfo });
            notifyListenersAndAddToOwnedTopics(topic, cb, ctx);
            hubManager.uploadSelfLoadData(myHubLoad.setNumTopics(topics.size()));
        }

        public void choose(final Version prevOwnerVersion) {
            hubManager.chooseLeastLoadedHub(new Callback<HubInfo>() {
                @Override
                public void operationFinished(Object ctx, HubInfo owner) {
                    logger.info("{} : Least loaded hub {} is chosen for topic {}",
                                new Object[] { addr, owner, topic.toStringUtf8() });
                    if (owner.getAddress().equals(addr)) {
                        claim(prevOwnerVersion);
                    } else {
                        setOwner(owner, prevOwnerVersion);
                    }
                }
                @Override
                public void operationFailed(Object ctx, PubSubException pse) {
                    logger.error("Failed to choose least loaded hub server for topic "
                                 + topic.toStringUtf8() + " : ", pse);
                    cb.operationFailed(ctx, pse);
                }
            }, null);
        }

        public void setOwner(final HubInfo ownerHubInfo, final Version prevOwnerVersion) {
            logger.info("{} : setting owner of topic {} to {}",
                        new Object[] { addr, topic.toStringUtf8(), ownerHubInfo });
            mm.writeOwnerInfo(topic, ownerHubInfo, prevOwnerVersion, new Callback<Version>() {
                @Override
                public void operationFinished(Object ctx, Version newVersion) {
                    logger.info("{} : Set owner of topic {} to {}",
                                new Object[] { addr, topic.toStringUtf8(), ownerHubInfo });
                    cb.operationFinished(ctx, ownerHubInfo.getAddress());
                }
                @Override
                public void operationFailed(Object ctx, PubSubException exception) {
                    if (exception instanceof PubSubException.NoTopicOwnerInfoException ||
                        exception instanceof PubSubException.BadVersionException) {
                        // someone else updated the owner record; re-read and retry
                        logger.info("{} : Someone else set the owner of topic {}. Reading the owner again.",
                                    new Object[] { addr, topic.toStringUtf8() });
                        read();
                        return;
                    }
                    cb.operationFailed(ctx, new PubSubException.ServiceDownException(
                        "Exception when writing owner info to set ownership of topic "
                        + topic.toStringUtf8() + " : " + exception.getMessage()));
                }
            }, ctx);
        }
    }

    @Override
    protected void postReleaseCleanup(final ByteString topic, final Callback<Void> cb, final Object ctx) {
        mm.readOwnerInfo(topic, new Callback<Versioned<HubInfo>>() {
            @Override
            public void operationFinished(Object ctx, Versioned<HubInfo> owner) {
                if (null == owner) {
                    // Node has somehow disappeared from under us, live with it
                    logger.warn("No owner info found when cleaning up topic " + topic.toStringUtf8());
                    cb.operationFinished(ctx, null);
                    return;
                }
                // no valid hub info found, just return
                if (null == owner.getValue()) {
                    logger.warn("No valid owner info found when cleaning up topic " + topic.toStringUtf8());
                    cb.operationFinished(ctx, null);
                    return;
                }
                HedwigSocketAddress ownerAddr = owner.getValue().getAddress();
                if (!ownerAddr.equals(addr)) {
                    logger.warn("Wanted to clean up self owner info for topic " + topic.toStringUtf8()
                                + " but owner " + owner + " found, leaving untouched");
                    // Not our node, someone else's, leave it alone
                    cb.operationFinished(ctx, null);
                    return;
                }
                mm.deleteOwnerInfo(topic, owner.getVersion(), new Callback<Void>() {
                    @Override
                    public void operationFinished(Object ctx, Void result) {
                        cb.operationFinished(ctx, null);
                    }
                    @Override
                    public void operationFailed(Object ctx, PubSubException exception) {
                        if (exception instanceof PubSubException.NoTopicOwnerInfoException) {
                            logger.warn("Wanted to clean up self owner info for topic " + topic.toStringUtf8()
                                        + " but it has already been removed.");
                            cb.operationFinished(ctx, null);
                            return;
                        }
                        logger.error("Exception when deleting self-ownership metadata for topic "
                                     + topic.toStringUtf8() + " : ", exception);
                        cb.operationFailed(ctx, new PubSubException.ServiceDownException(exception));
                    }
                }, ctx);
            }

            @Override
            public void operationFailed(Object ctx, PubSubException exception) {
                logger.error("Exception when cleaning up owner info of topic "
                             + topic.toStringUtf8() + " : ", exception);
                cb.operationFailed(ctx, new PubSubException.ServiceDownException(exception));
            }
        }, ctx);
    }

    @Override
    public void stop() {
        // unregister from ZooKeeper so this hub disappears from the available-hubs list
        try {
            hubManager.unregisterSelf();
        } catch (IOException e) {
            logger.error("Error unregistering hub server " + myHubInfo + " : ", e);
        }
        super.stop();
    }
}
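
// ---------------------------------------------------------------------------
// Editorial sketch (not part of this release): the election above boils down
// to a compare-and-swap on a versioned owner record -- read the record with
// its version, then write conditionally on that version; a bad-version
// failure means another hub won the race, so re-read and retry. The
// VersionedStore class below is a hypothetical in-memory stand-in for the
// metadata manager API, kept minimal so the pattern is easy to see.
// ---------------------------------------------------------------------------
class OwnerElectionSketch {
    static class VersionedValue {
        final String owner;
        final long version;
        VersionedValue(String owner, long version) { this.owner = owner; this.version = version; }
    }

    static class VersionedStore {
        private String owner = null;
        private long version = 0;
        // atomic read of the owner record together with its version
        synchronized VersionedValue read() { return new VersionedValue(owner, version); }
        // conditional write: succeeds only if the caller saw the latest version
        synchronized boolean writeIfVersion(String newOwner, long expectedVersion) {
            if (version != expectedVersion) {
                return false; // equivalent of BadVersionException: caller must re-read
            }
            owner = newOwner;
            version++;
            return true;
        }
    }

    static String elect(VersionedStore store, String self) {
        while (true) {
            VersionedValue vv = store.read();
            if (vv.owner != null) {
                return vv.owner;          // a live owner already exists, use it
            }
            if (store.writeIfVersion(self, vv.version)) {
                return self;              // our conditional write won the election
            }
            // lost the race; loop back and re-read, as read() does above
        }
    }

    public static void main(String[] args) {
        VersionedStore store = new VersionedStore();
        System.out.println(elect(store, "hub-1:4080")); // hub-1:4080 wins
        System.out.println(elect(store, "hub-2:4080")); // still hub-1:4080
    }
}
TopicManager.java000066400000000000000000000063111244507361200337240ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/topics/
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.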
 */
package org.apache.hedwig.server.topics;

import com.google.protobuf.ByteString;

import org.apache.hedwig.exceptions.PubSubException.ServiceDownException;
import org.apache.hedwig.server.persistence.PersistenceManager;
import org.apache.hedwig.util.Callback;
import org.apache.hedwig.util.HedwigSocketAddress;

/**
 * An implementor of this interface is responsible for ensuring that there is
 * at most a single host responsible for a given topic at a given time. Also,
 * it is desirable that on a host failure, other hosts in the cluster claim
 * responsibility for the topics that were at the failed host. On claiming
 * responsibility for a topic, a host should call its
 * {@link TopicOwnershipChangeListener}.
 */
public interface TopicManager {
    /**
     * Get the name of the host responsible for the given topic.
     *
     * @param topic
     *            The topic whose owner to look up.
     * @param shouldClaim
     *            Whether this host should try to claim ownership of the topic
     *            if no live owner is found.
     * @param cb
     *            Callback invoked with the address of the host responsible for
     *            the topic, or failed with a {@link ServiceDownException} if
     *            there is an error looking up the information.
     */
    public void getOwner(ByteString topic, boolean shouldClaim,
                         Callback<HedwigSocketAddress> cb, Object ctx);

    /**
     * Whenever the topic manager finds out that the set of topics owned by
     * this node has changed, it can notify a set of
     * {@link TopicOwnershipChangeListener} objects. Any component of the
     * system (e.g., the {@link PersistenceManager}) can listen for such
     * changes by implementing the {@link TopicOwnershipChangeListener}
     * interface and registering itself with the {@link TopicManager} using
     * this method. It is important that the
     * {@link TopicOwnershipChangeListener} reacts immediately to such
     * notifications, without blocking (because multiple listeners might need
     * to be informed and they are all informed by the same thread).
     *
     * @param listener
     */
    public void addTopicOwnershipChangeListener(TopicOwnershipChangeListener listener);

    /**
     * Give up ownership of a topic. If this host doesn't own it, do nothing.
     *
     * @param cb
     *            Callback failed with a {@link ServiceDownException} if there
     *            is an error releasing ownership of the topic.
     */
    public void releaseTopic(ByteString topic, Callback<Void> cb, Object ctx);

    /**
     * Stop the topic manager.
     */
    public void stop();
}
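
// ---------------------------------------------------------------------------
// Editorial usage sketch (not part of this release) of the asynchronous
// getOwner() contract above: the result arrives on the callback, never as a
// return value. OwnerLookupExample and its tm field are hypothetical; any
// TopicManager implementation from this package could be passed in.
// ---------------------------------------------------------------------------
class OwnerLookupExample {
    private final TopicManager tm;

    OwnerLookupExample(TopicManager tm) { this.tm = tm; }

    void printOwner(final String topicName) {
        ByteString topic = ByteString.copyFromUtf8(topicName);
        // false => only look up the owner, don't try to claim the topic here
        tm.getOwner(topic, false, new Callback<HedwigSocketAddress>() {
            @Override
            public void operationFinished(Object ctx, HedwigSocketAddress owner) {
                System.out.println(topicName + " is owned by " + owner);
            }
            @Override
            public void operationFailed(Object ctx, org.apache.hedwig.exceptions.PubSubException e) {
                System.err.println("lookup failed: " + e.getMessage());
            }
        }, null);
    }
}
TopicOwnershipChangeListener.java000066400000000000000000000021301244507361200371470ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/topics/
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.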
 */
package org.apache.hedwig.server.topics;

import com.google.protobuf.ByteString;

import org.apache.hedwig.util.Callback;

public interface TopicOwnershipChangeListener {

    public void acquiredTopic(ByteString topic, Callback<Void> callback, Object ctx);

    public void lostTopic(ByteString topic);
}
TrivialOwnAllTopicManager.java000066400000000000000000000036711244507361200364020ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/topics/
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.hedwig.server.topics;

import java.net.UnknownHostException;
import java.util.concurrent.ScheduledExecutorService;

import com.google.protobuf.ByteString;

import org.apache.hedwig.server.common.ServerConfiguration;
import org.apache.hedwig.util.Callback;
import org.apache.hedwig.util.HedwigSocketAddress;

public class TrivialOwnAllTopicManager extends AbstractTopicManager {

    public TrivialOwnAllTopicManager(ServerConfiguration cfg, ScheduledExecutorService scheduler)
            throws UnknownHostException {
        super(cfg, scheduler);
    }

    @Override
    protected void realGetOwner(ByteString topic, boolean shouldClaim,
                                Callback<HedwigSocketAddress> cb, Object ctx) {
        if (topics.contains(topic)) {
            cb.operationFinished(ctx, addr);
            return;
        }
        notifyListenersAndAddToOwnedTopics(topic, cb, ctx);
    }

    @Override
    protected void postReleaseCleanup(ByteString topic, Callback<Void> cb, Object ctx) {
        // No cleanup to do
        cb.operationFinished(ctx, null);
    }

    @Override
    public void stop() {
        // do nothing
    }
}
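
// ---------------------------------------------------------------------------
// Editorial sketch (not part of this release) of a TopicOwnershipChangeListener
// implementation: it simply tracks the owned-topic set. The contract documented
// in TopicManager requires both methods to return without blocking, so any real
// work should be handed off to another thread. The class name and its use of a
// concurrent set are illustrative assumptions.
// ---------------------------------------------------------------------------
class OwnedSetListener implements TopicOwnershipChangeListener {
    private final java.util.Set<ByteString> owned =
        java.util.Collections.newSetFromMap(
            new java.util.concurrent.ConcurrentHashMap<ByteString, Boolean>());

    @Override
    public void acquiredTopic(ByteString topic, Callback<Void> callback, Object ctx) {
        owned.add(topic);
        // acknowledge immediately; never block the notifying thread
        callback.operationFinished(ctx, null);
    }

    @Override
    public void lostTopic(ByteString topic) {
        owned.remove(topic);
    }
}
ZkHubServerManager.java000066400000000000000000000317611244507361200350670ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/topics/
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.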
 */
package org.apache.hedwig.server.topics;

import java.io.IOException;
import java.util.List;
import java.util.Random;

import org.apache.hedwig.exceptions.PubSubException;
import org.apache.hedwig.server.common.ServerConfiguration;
import org.apache.hedwig.util.Callback;
import org.apache.hedwig.util.HedwigSocketAddress;
import org.apache.hedwig.zookeeper.SafeAsyncZKCallback;
import org.apache.hedwig.zookeeper.SafeAsyncZKCallback.StatCallback;
import org.apache.hedwig.zookeeper.ZkUtils;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.KeeperException.Code;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs.Ids;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * ZooKeeper based hub server manager.
 */
class ZkHubServerManager implements HubServerManager {

    static Logger logger = LoggerFactory.getLogger(ZkHubServerManager.class);

    final Random rand = new Random();

    private final ServerConfiguration conf;
    private final ZooKeeper zk;
    private final HedwigSocketAddress addr;
    private final String ephemeralNodePath;
    private final String hubNodesPath;

    // hub info structure representing this hub itself
    protected HubInfo myHubInfo;
    protected volatile boolean isSuspended = false;
    protected ManagerListener listener = null;

    // callback used when uploading this hub server's load to zookeeper
    StatCallback loadReportingStatCallback = new StatCallback() {
        @Override
        public void safeProcessResult(int rc, String path, Object ctx, Stat stat) {
            if (rc != KeeperException.Code.OK.intValue()) {
                logger.warn("Failed to update load information of hub {} in zk", myHubInfo);
            }
        }
    };

    /**
     * Watcher to monitor the available hub server list.
     */
    class ZkHubsWatcher implements Watcher {
        @Override
        public void process(WatchedEvent event) {
            if (event.getType().equals(Watcher.Event.EventType.None)) {
                if (event.getState().equals(Watcher.Event.KeeperState.Disconnected)) {
                    logger.warn("ZK client has been disconnected from the ZK server!");
                    isSuspended = true;
                    if (null != listener) {
                        listener.onSuspend();
                    }
                } else if (event.getState().equals(Watcher.Event.KeeperState.SyncConnected)) {
                    if (isSuspended) {
                        logger.info("ZK client has been reconnected to the ZK server!");
                    }
                    isSuspended = false;
                    if (null != listener) {
                        listener.onResume();
                    }
                }
            }
            if (event.getState().equals(Watcher.Event.KeeperState.Expired)) {
                logger.error("ZK client connection to the ZK server has expired!");
                if (null != listener) {
                    listener.onShutdown();
                }
            }
        }
    }

    public ZkHubServerManager(ServerConfiguration conf, ZooKeeper zk, HedwigSocketAddress addr) {
        this.conf = conf;
        this.zk = zk;
        this.addr = addr;
        // znode path under which all available hub servers are registered
        this.hubNodesPath = this.conf.getZkHostsPrefix(new StringBuilder()).toString();
        // this hub's ephemeral znode path
        this.ephemeralNodePath = getHubZkNodePath(addr);
        // register the watcher monitoring the available hub servers list
        zk.register(new ZkHubsWatcher());
    }

    @Override
    public void registerListener(ManagerListener listener) {
        this.listener = listener;
    }

    /**
     * Get the znode path identifying a hub server.
     *
     * @param node
     *            Hub Server Address
     * @return znode path identifying the hub server.
     */
    private String getHubZkNodePath(HedwigSocketAddress node) {
        String nodePath = this.conf.getZkHostsPrefix(new StringBuilder())
            .append("/").append(node).toString();
        return nodePath;
    }

    @Override
    public void registerSelf(final HubLoad selfData, final Callback<HubInfo> callback, Object ctx) {
        byte[] loadDataBytes = selfData.toString().getBytes();
        ZkUtils.createFullPathOptimistic(zk, ephemeralNodePath, loadDataBytes,
            Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL,
            new SafeAsyncZKCallback.StringCallback() {
                @Override
                public void safeProcessResult(int rc, String path, Object ctx, String name) {
                    if (rc == Code.OK.intValue()) {
                        // the ephemeral node exists now; stat it to learn our czxid
                        zk.exists(ephemeralNodePath, false, new SafeAsyncZKCallback.StatCallback() {
                            @Override
                            public void safeProcessResult(int rc, String path, Object ctx, Stat stat) {
                                if (rc == Code.OK.intValue()) {
                                    myHubInfo = new HubInfo(addr, stat.getCzxid());
                                    callback.operationFinished(ctx, myHubInfo);
                                    return;
                                } else {
                                    callback.operationFailed(ctx, new PubSubException.ServiceDownException(
                                        "Could not stat hub node after creating it : " + ephemeralNodePath));
                                    return;
                                }
                            }
                        }, ctx);
                        return;
                    }
                    if (rc != Code.NODEEXISTS.intValue()) {
                        KeeperException ke = ZkUtils.logErrorAndCreateZKException(
                            "Could not create ephemeral node to register hub", ephemeralNodePath, rc);
                        callback.operationFailed(ctx, new PubSubException.ServiceDownException(ke));
                        return;
                    }
                    logger.info("Found stale ephemeral node while registering hub with ZK, deleting it");
                    // Node exists, let's try to delete it and retry
                    zk.delete(ephemeralNodePath, -1, new SafeAsyncZKCallback.VoidCallback() {
                        @Override
                        public void safeProcessResult(int rc, String path, Object ctx) {
                            if (rc == Code.OK.intValue() || rc == Code.NONODE.intValue()) {
                                registerSelf(selfData, callback, ctx);
                                return;
                            }
                            KeeperException ke = ZkUtils.logErrorAndCreateZKException(
                                "Could not delete stale ephemeral node to register hub",
                                ephemeralNodePath, rc);
                            callback.operationFailed(ctx, new PubSubException.ServiceDownException(ke));
                            return;
                        }
                    }, ctx);
                }
            }, ctx);
    }

    @Override
    public void unregisterSelf() throws IOException {
        try {
            zk.delete(ephemeralNodePath, -1);
        } catch (InterruptedException e) {
            throw new IOException(e);
        } catch (KeeperException e) {
            throw new IOException(e);
        }
    }

    @Override
    public void uploadSelfLoadData(HubLoad selfLoad) {
        logger.debug("Reporting hub load of {} : {}", myHubInfo, selfLoad);
        byte[] loadDataBytes = selfLoad.toString().getBytes();
        zk.setData(ephemeralNodePath, loadDataBytes, -1, loadReportingStatCallback, null);
    }

    @Override
    public void isHubAlive(final HubInfo hub, final Callback<Boolean> callback, Object ctx) {
        zk.exists(getHubZkNodePath(hub.getAddress()), false, new SafeAsyncZKCallback.StatCallback() {
            @Override
            public void safeProcessResult(int rc, String path, Object ctx, Stat stat) {
                if (rc == Code.NONODE.intValue()) {
                    callback.operationFinished(ctx, false);
                } else if (rc == Code.OK.intValue()) {
                    // it is the same live hub only if the creation zxid matches
                    if (hub.getZxid() == stat.getCzxid()) {
                        callback.operationFinished(ctx, true);
                    } else {
                        callback.operationFinished(ctx, false);
                    }
                } else {
                    callback.operationFailed(ctx, new PubSubException.ServiceDownException(
                        "Failed to check whether hub server " + hub + " is alive!"));
                }
            }
        }, ctx);
    }

    @Override
    public void chooseLeastLoadedHub(final Callback<HubInfo> callback, Object ctx) {
        // Get the list of existing hosts
        zk.getChildren(hubNodesPath, false, new SafeAsyncZKCallback.ChildrenCallback() {
            @Override
            public void safeProcessResult(int rc, String path, Object ctx, List<String> children) {
                if (rc != Code.OK.intValue()) {
                    KeeperException e = ZkUtils.logErrorAndCreateZKException(
                        "Could not get list of available hubs", path, rc);
                    callback.operationFailed(ctx, new PubSubException.ServiceDownException(e));
                    return;
                }
                chooseLeastLoadedNode(children, callback, ctx);
            }
        }, ctx);
    }

    private void chooseLeastLoadedNode(final List<String> children,
                                       final Callback<HubInfo> callback, Object ctx) {
        SafeAsyncZKCallback.DataCallback dataCallback = new SafeAsyncZKCallback.DataCallback() {
            int numResponses = 0;
            HubLoad minLoad = HubLoad.MAX_LOAD;
            String leastLoaded = null;
            long leastLoadedCzxid = 0;

            @Override
            public void safeProcessResult(int rc, String path, Object ctx, byte[] data, Stat stat) {
                // note: ctx here is the child (hub) name this read was issued for
                synchronized (this) {
                    if (rc == KeeperException.Code.OK.intValue()) {
                        try {
                            HubLoad load = HubLoad.parse(new String(data));
                            logger.debug("Found server {} with load: {}", ctx, load);
                            int compareRes = load.compareTo(minLoad);
                            // on a tie, flip a coin so equally loaded hubs are picked evenly
                            if (compareRes < 0 || (compareRes == 0 && rand.nextBoolean())) {
                                minLoad = load;
                                leastLoaded = (String) ctx;
                                leastLoadedCzxid = stat.getCzxid();
                            }
                        } catch (HubLoad.InvalidHubLoadException e) {
                            // some corrupted data, we'll just ignore this hub
                            logger.warn("Corrupted load information from hub : " + ctx);
                        }
                    }
                    numResponses++;
                    if (numResponses == children.size()) {
                        if (leastLoaded == null) {
                            callback.operationFailed(ctx,
                                new PubSubException.ServiceDownException("No hub available"));
                            return;
                        }
                        try {
                            HedwigSocketAddress owner = new HedwigSocketAddress(leastLoaded);
                            callback.operationFinished(ctx, new HubInfo(owner, leastLoadedCzxid));
                        } catch (Throwable t) {
                            callback.operationFailed(ctx, new PubSubException.ServiceDownException(
                                "Least loaded hub server " + leastLoaded + " is invalid."));
                        }
                    }
                }
            }
        };
        for (String child : children) {
            zk.getData(conf.getZkHostsPrefix(new StringBuilder()).append("/").append(child).toString(),
                       false, dataCallback, child);
        }
    }
}
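
// ---------------------------------------------------------------------------
// Editorial sketch (not part of this release) of the selection rule
// chooseLeastLoadedNode applies above: keep the hub with the smallest load,
// and on a tie flip a coin, so that equally loaded hubs receive new topics
// roughly evenly instead of the first-listed hub winning every tie. The
// Map-based input is an illustrative assumption.
// ---------------------------------------------------------------------------
class LeastLoadedChooserSketch {
    static String choose(java.util.Map<String, Integer> loads, java.util.Random rand) {
        String best = null;
        int bestLoad = Integer.MAX_VALUE;
        for (java.util.Map.Entry<String, Integer> e : loads.entrySet()) {
            int load = e.getValue();
            // a strictly smaller load always wins; an equal load wins half the time
            if (load < bestLoad || (load == bestLoad && rand.nextBoolean())) {
                best = e.getKey();
                bestLoad = load;
            }
        }
        return best; // null only if the map is empty
    }
}
ZkTopicManager.java000066400000000000000000000345211244507361200342350ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/server/topics/
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.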
 */
package org.apache.hedwig.server.topics;

import java.net.UnknownHostException;
import java.io.IOException;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.SynchronousQueue;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.KeeperException.Code;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.ZooDefs.Ids;
import org.apache.zookeeper.data.Stat;

import com.google.protobuf.ByteString;

import org.apache.hedwig.exceptions.PubSubException;
import org.apache.hedwig.server.common.ServerConfiguration;
import org.apache.hedwig.util.Callback;
import org.apache.hedwig.util.ConcurrencyUtils;
import org.apache.hedwig.util.Either;
import org.apache.hedwig.util.HedwigSocketAddress;
import org.apache.hedwig.zookeeper.SafeAsyncZKCallback;
import org.apache.hedwig.zookeeper.ZkUtils;
import org.apache.hedwig.zookeeper.SafeAsyncZKCallback.DataCallback;
import org.apache.hedwig.zookeeper.SafeAsyncZKCallback.StatCallback;

/**
 * Topics are operated on in parallel, as they are independent.
 */
public class ZkTopicManager extends AbstractTopicManager implements TopicManager {

    static Logger logger = LoggerFactory.getLogger(ZkTopicManager.class);

    /**
     * Persistent storage for topic metadata.
     */
    private ZooKeeper zk;

    // hub server manager
    private final HubServerManager hubManager;

    private final HubInfo myHubInfo;
    private final HubLoad myHubLoad;

    // Boolean flag indicating if we should suspend activity. If this is true,
    // all of the Ops put into the queuer will fail automatically.
    protected volatile boolean isSuspended = false;

    /**
     * Create a new topic manager. Pass in an active ZooKeeper client object.
     *
     * @param zk
     */
    public ZkTopicManager(final ZooKeeper zk, final ServerConfiguration cfg,
                          ScheduledExecutorService scheduler)
            throws UnknownHostException, PubSubException {
        super(cfg, scheduler);
        this.zk = zk;
        this.hubManager = new ZkHubServerManager(cfg, zk, addr);

        myHubLoad = new HubLoad(topics.size());
        this.hubManager.registerListener(new HubServerManager.ManagerListener() {
            @Override
            public void onSuspend() {
                isSuspended = true;
            }
            @Override
            public void onResume() {
                isSuspended = false;
            }
            @Override
            public void onShutdown() {
                // if the hub server manager can't work, we have to quit
                Runtime.getRuntime().exit(1);
            }
        });

        final SynchronousQueue<Either<HubInfo, PubSubException>> queue =
            new SynchronousQueue<Either<HubInfo, PubSubException>>();
        this.hubManager.registerSelf(myHubLoad, new Callback<HubInfo>() {
            @Override
            public void operationFinished(final Object ctx, final HubInfo resultOfOperation) {
                logger.info("Successfully registered hub {} with zookeeper", resultOfOperation);
                ConcurrencyUtils.put(queue, Either.of(resultOfOperation, (PubSubException) null));
            }
            @Override
            public void operationFailed(Object ctx, PubSubException exception) {
                logger.error("Failed to register hub with zookeeper", exception);
                ConcurrencyUtils.put(queue, Either.of((HubInfo) null, exception));
            }
        }, null);
        Either<HubInfo, PubSubException> result = ConcurrencyUtils.take(queue);
        PubSubException pse = result.right();
        if (pse != null) {
            throw pse;
        }
        myHubInfo = result.left();
    }

    String hubPath(ByteString topic) {
        return cfg.getZkTopicPath(new StringBuilder(), topic).append("/hub").toString();
    }

    @Override
    protected void realGetOwner(final ByteString topic, final boolean shouldClaim,
                                final Callback<HedwigSocketAddress> cb, final Object ctx) {
        // If operations are suspended due to a ZK client disconnect, just
        // error out this call and return.
        if (isSuspended) {
            cb.operationFailed(ctx, new PubSubException.ServiceDownException(
                "ZkTopicManager service is temporarily suspended!"));
            return;
        }

        if (topics.contains(topic)) {
            cb.operationFinished(ctx, addr);
            return;
        }

        new ZkGetOwnerOp(topic, shouldClaim, cb, ctx).read();
    }

    // The methods of this op call each other recursively.
    class ZkGetOwnerOp {
        ByteString topic;
        boolean shouldClaim;
        Callback<HedwigSocketAddress> cb;
        Object ctx;
        String hubPath;

        public ZkGetOwnerOp(ByteString topic, boolean shouldClaim,
                            Callback<HedwigSocketAddress> cb, Object ctx) {
            this.topic = topic;
            this.shouldClaim = shouldClaim;
            this.cb = cb;
            this.ctx = ctx;
            hubPath = hubPath(topic);
        }

        public void choose() {
            hubManager.chooseLeastLoadedHub(new Callback<HubInfo>() {
                @Override
                public void operationFinished(Object ctx, HubInfo owner) {
                    logger.info("{} : Least loaded owner {} is chosen for topic {}",
                                new Object[] { addr, owner, topic.toStringUtf8() });
                    if (owner.getAddress().equals(addr)) {
                        claim();
                    } else {
                        cb.operationFinished(ZkGetOwnerOp.this.ctx, owner.getAddress());
                    }
                }
                @Override
                public void operationFailed(Object ctx, PubSubException pse) {
                    logger.error("Failed to choose least loaded hub server for topic "
                                 + topic.toStringUtf8() + " : ", pse);
                    cb.operationFailed(ctx, pse);
                }
            }, null);
        }

        public void claimOrChoose() {
            if (shouldClaim)
                claim();
            else
                choose();
        }

        public void read() {
            zk.getData(hubPath, false, new SafeAsyncZKCallback.DataCallback() {
                @Override
                public void safeProcessResult(int rc, String path, Object ctx, byte[] data, Stat stat) {
                    if (rc == Code.NONODE.intValue()) {
                        claimOrChoose();
                        return;
                    }
                    if (rc != Code.OK.intValue()) {
                        KeeperException e = ZkUtils.logErrorAndCreateZKException(
                            "Could not read ownership for topic: " + topic.toStringUtf8(), path, rc);
                        cb.operationFailed(ctx, new PubSubException.ServiceDownException(e));
                        return;
                    }
                    // successfully did a read
                    try {
                        HubInfo ownerHubInfo = HubInfo.parse(new String(data));
                        HedwigSocketAddress owner = ownerHubInfo.getAddress();
                        if (!owner.equals(addr)) {
                            if (logger.isDebugEnabled()) {
                                logger.debug("topic: " + topic.toStringUtf8()
                                             + " belongs to someone else: " + owner);
                            }
                            cb.operationFinished(ctx, owner);
                            return;
                        }
                        logger.info("Discovered stale self-node for topic: "
                                    + topic.toStringUtf8() + ", will delete it");
                    } catch (HubInfo.InvalidHubInfoException ihie) {
                        logger.info("Discovered invalid hub info for topic: "
                                    + topic.toStringUtf8() + ", will delete it : ", ihie);
                    }
                    // we must have previously failed and left a residual
                    // ephemeral node here, so we must delete it (clean it up)
                    // and then re-create/re-acquire the topic.
                    zk.delete(hubPath, stat.getVersion(), new VoidCallback() {
                        @Override
                        public void processResult(int rc, String path, Object ctx) {
                            if (Code.OK.intValue() == rc || Code.NONODE.intValue() == rc) {
                                claimOrChoose();
                            } else {
                                KeeperException e = ZkUtils.logErrorAndCreateZKException(
                                    "Could not delete self node for topic: " + topic.toStringUtf8(),
                                    path, rc);
                                cb.operationFailed(ctx, new PubSubException.ServiceDownException(e));
                            }
                        }
                    }, ctx);
                }
            }, ctx);
        }

        public void claim() {
            if (logger.isDebugEnabled()) {
                logger.debug("claiming topic: " + topic.toStringUtf8());
            }
            ZkUtils.createFullPathOptimistic(zk, hubPath, myHubInfo.toString().getBytes(),
                Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL,
                new SafeAsyncZKCallback.StringCallback() {
                    @Override
                    public void safeProcessResult(int rc, String path, Object ctx, String name) {
                        if (rc == Code.OK.intValue()) {
                            if (logger.isDebugEnabled()) {
                                logger.debug("claimed topic: " + topic.toStringUtf8());
                            }
                            notifyListenersAndAddToOwnedTopics(topic, cb, ctx);
                            hubManager.uploadSelfLoadData(myHubLoad.setNumTopics(topics.size()));
                        } else if (rc == Code.NODEEXISTS.intValue()) {
                            // lost the race to another hub (or found a residual node); re-read
                            read();
                        } else {
                            KeeperException e = ZkUtils.logErrorAndCreateZKException(
                                "Failed to create ephemeral node to claim ownership of topic: "
                                + topic.toStringUtf8(), path, rc);
                            cb.operationFailed(ctx, new PubSubException.ServiceDownException(e));
                        }
                    }
                }, ctx);
        }
    }

    @Override
    protected void postReleaseCleanup(final ByteString topic, final Callback<Void> cb, Object ctx) {
        zk.getData(hubPath(topic), false, new SafeAsyncZKCallback.DataCallback() {
            @Override
            public void safeProcessResult(int rc, String path, Object ctx, byte[] data, Stat stat) {
                if (rc == Code.NONODE.intValue()) {
                    // Node has somehow disappeared from under us; live with it,
                    // since it's a transient node
                    logger.warn("While deleting self-node for topic: " + topic.toStringUtf8()
                                + ", node not found");
                    cb.operationFinished(ctx, null);
                    return;
                }
                if (rc != Code.OK.intValue()) {
                    KeeperException e = ZkUtils.logErrorAndCreateZKException(
                        "Failed to read self-ownership node for topic: " + topic.toStringUtf8(),
                        path, rc);
                    cb.operationFailed(ctx, new PubSubException.ServiceDownException(e));
                    return;
                }
                String hubInfoStr = new String(data);
                try {
                    HubInfo ownerHubInfo = HubInfo.parse(hubInfoStr);
                    HedwigSocketAddress owner = ownerHubInfo.getAddress();
                    if (!owner.equals(addr)) {
                        logger.warn("Wanted to delete self-node for topic: " + topic.toStringUtf8()
                                    + " but node for " + owner + " found, leaving untouched");
                        // Not our node, someone else's, leave it alone
                        cb.operationFinished(ctx, null);
                        return;
                    }
                } catch (HubInfo.InvalidHubInfoException ihie) {
                    logger.info("Invalid hub info " + hubInfoStr + " found when releasing topic "
                                + topic.toStringUtf8() + ". Leaving it untouched until the next acquire.");
                    cb.operationFinished(ctx, null);
                    return;
                }
                zk.delete(path, stat.getVersion(), new SafeAsyncZKCallback.VoidCallback() {
                    @Override
                    public void safeProcessResult(int rc, String path, Object ctx) {
                        if (rc != Code.OK.intValue() && rc != Code.NONODE.intValue()) {
                            KeeperException e = ZkUtils.logErrorAndCreateZKException(
                                "Failed to delete self-ownership node for topic: "
                                + topic.toStringUtf8(), path, rc);
                            cb.operationFailed(ctx, new PubSubException.ServiceDownException(e));
                            return;
                        }
                        cb.operationFinished(ctx, null);
                    }
                }, ctx);
            }
        }, ctx);
    }

    @Override
    public void stop() {
        // unregister from ZooKeeper so this hub disappears from the available-hubs list
        try {
            hubManager.unregisterSelf();
        } catch (IOException e) {
            logger.error("Error unregistering hub server : ", e);
        }
        super.stop();
    }
}
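
// ---------------------------------------------------------------------------
// Editorial sketch (not part of this release) of the claim/read/delete cycle
// ZkGetOwnerOp drives asynchronously above, written with the synchronous
// ZooKeeper API for readability: try to create the ephemeral owner znode; on
// NODEEXISTS read the current owner; if the record is our own stale leftover,
// delete it and retry. Error handling and HubInfo parsing are elided;
// ownerPath and selfId are assumed inputs.
// ---------------------------------------------------------------------------
class EphemeralClaimSketch {
    static String claimOwner(org.apache.zookeeper.ZooKeeper zk, String ownerPath, String selfId)
            throws org.apache.zookeeper.KeeperException, InterruptedException {
        while (true) {
            try {
                zk.create(ownerPath, selfId.getBytes(),
                          org.apache.zookeeper.ZooDefs.Ids.OPEN_ACL_UNSAFE,
                          org.apache.zookeeper.CreateMode.EPHEMERAL);
                return selfId; // claim succeeded
            } catch (org.apache.zookeeper.KeeperException.NodeExistsException e) {
                org.apache.zookeeper.data.Stat stat = new org.apache.zookeeper.data.Stat();
                String owner = new String(zk.getData(ownerPath, false, stat));
                if (!owner.equals(selfId)) {
                    return owner; // someone else legitimately owns the topic
                }
                // stale self-node from a previous session: delete it and retry
                zk.delete(ownerPath, stat.getVersion());
            }
        }
    }
}
bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/zookeeper/000077500000000000000000000000001244507361200277625ustar00rootroot00000000000000SafeAsynBKCallback.java000066400000000000000000000066411244507361200341200ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/zookeeper/
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.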
*/ package org.apache.hedwig.zookeeper; import java.util.Enumeration; import org.apache.bookkeeper.client.AsyncCallback; import org.apache.bookkeeper.client.LedgerEntry; import org.apache.bookkeeper.client.LedgerHandle; public class SafeAsynBKCallback extends SafeAsyncCallback { public static abstract class OpenCallback implements AsyncCallback.OpenCallback { @Override public void openComplete(int rc, LedgerHandle ledgerHandle, Object ctx) { try { safeOpenComplete(rc, ledgerHandle, ctx); } catch(Throwable t) { invokeUncaughtExceptionHandler(t); } } public abstract void safeOpenComplete(int rc, LedgerHandle ledgerHandle, Object ctx); } public static abstract class CloseCallback implements AsyncCallback.CloseCallback { @Override public void closeComplete(int rc, LedgerHandle ledgerHandle, Object ctx) { try { safeCloseComplete(rc, ledgerHandle, ctx); } catch(Throwable t) { invokeUncaughtExceptionHandler(t); } } public abstract void safeCloseComplete(int rc, LedgerHandle ledgerHandle, Object ctx) ; } public static abstract class ReadCallback implements AsyncCallback.ReadCallback { @Override public void readComplete(int rc, LedgerHandle lh, Enumeration seq, Object ctx) { try { safeReadComplete(rc, lh, seq, ctx); } catch(Throwable t) { invokeUncaughtExceptionHandler(t); } } public abstract void safeReadComplete(int rc, LedgerHandle lh, Enumeration seq, Object ctx); } public static abstract class CreateCallback implements AsyncCallback.CreateCallback { @Override public void createComplete(int rc, LedgerHandle lh, Object ctx) { try { safeCreateComplete(rc, lh, ctx); } catch(Throwable t) { invokeUncaughtExceptionHandler(t); } } public abstract void safeCreateComplete(int rc, LedgerHandle lh, Object ctx); } public static abstract class AddCallback implements AsyncCallback.AddCallback { @Override public void addComplete(int rc, LedgerHandle lh, long entryId, Object ctx) { try { safeAddComplete(rc, lh, entryId, ctx); } catch(Throwable t) { invokeUncaughtExceptionHandler(t); } } public abstract void safeAddComplete(int rc, LedgerHandle lh, long entryId, Object ctx); } } SafeAsyncCallback.java000066400000000000000000000026761244507361200340520ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/zookeeper/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package org.apache.hedwig.zookeeper; import java.lang.Thread.UncaughtExceptionHandler; import org.apache.hedwig.server.common.TerminateJVMExceptionHandler; public class SafeAsyncCallback { static UncaughtExceptionHandler uncaughtExceptionHandler = new TerminateJVMExceptionHandler(); public static void setUncaughtExceptionHandler(UncaughtExceptionHandler uncaughtExceptionHandler) { SafeAsyncCallback.uncaughtExceptionHandler = uncaughtExceptionHandler; } static void invokeUncaughtExceptionHandler(Throwable t) { Thread thread = Thread.currentThread(); uncaughtExceptionHandler.uncaughtException(thread, t); } } SafeAsyncZKCallback.java000066400000000000000000000073771244507361200343220ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/zookeeper/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.zookeeper; import java.util.List; import org.apache.zookeeper.AsyncCallback; import org.apache.zookeeper.data.ACL; import org.apache.zookeeper.data.Stat; public class SafeAsyncZKCallback extends SafeAsyncCallback { public static abstract class StatCallback implements AsyncCallback.StatCallback { public void processResult(int rc, String path, Object ctx, Stat stat) { try { safeProcessResult(rc, path, ctx, stat); } catch (Throwable t) { invokeUncaughtExceptionHandler(t); } } public abstract void safeProcessResult(int rc, String path, Object ctx, Stat stat); } public static abstract class DataCallback implements AsyncCallback.DataCallback { public void processResult(int rc, String path, Object ctx, byte data[], Stat stat) { try { safeProcessResult(rc, path, ctx, data, stat); } catch (Throwable t) { invokeUncaughtExceptionHandler(t); } } public abstract void safeProcessResult(int rc, String path, Object ctx, byte data[], Stat stat); } public static abstract class ACLCallback implements AsyncCallback.ACLCallback { public void processResult(int rc, String path, Object ctx, List acl, Stat stat) { try { safeProcessResult(rc, path, ctx, acl, stat); } catch (Throwable t) { invokeUncaughtExceptionHandler(t); } } public abstract void safeProcessResult(int rc, String path, Object ctx, List acl, Stat stat); } public static abstract class ChildrenCallback implements AsyncCallback.ChildrenCallback { public void processResult(int rc, String path, Object ctx, List children) { try { safeProcessResult(rc, path, ctx, children); } catch (Throwable t) { invokeUncaughtExceptionHandler(t); } } public abstract void safeProcessResult(int rc, String path, Object ctx, List children); } public static abstract class StringCallback implements AsyncCallback.StringCallback { public void processResult(int rc, String path, Object ctx, String name) { try { safeProcessResult(rc, path, ctx, name); } catch (Throwable t) { 
invokeUncaughtExceptionHandler(t); } } public abstract void safeProcessResult(int rc, String path, Object ctx, String name); } public static abstract class VoidCallback implements AsyncCallback.VoidCallback { public void processResult(int rc, String path, Object ctx) { try { safeProcessResult(rc, path, ctx); } catch (Throwable t) { invokeUncaughtExceptionHandler(t); } } public abstract void safeProcessResult(int rc, String path, Object ctx); } } bookkeeper-release-4.2.4/hedwig-server/src/main/java/org/apache/hedwig/zookeeper/ZkUtils.java000066400000000000000000000107151244507361200322360ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.zookeeper; import java.io.IOException; import java.util.List; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.apache.zookeeper.AsyncCallback; import org.apache.zookeeper.CreateMode; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.ZooKeeper; import org.apache.zookeeper.KeeperException.Code; import org.apache.zookeeper.data.ACL; import org.apache.hedwig.util.PathUtils; public class ZkUtils { static Logger logger = LoggerFactory.getLogger(ZkUtils.class); static class SyncObject { int rc; String path; boolean called = false; } public static void createFullPathOptimistic(final ZooKeeper zk, final String originalPath, final byte[] data, final List acl, final CreateMode createMode) throws KeeperException, IOException, InterruptedException { final SyncObject syncObj = new SyncObject(); createFullPathOptimistic( zk, originalPath, data, acl, createMode, new SafeAsyncZKCallback.StringCallback() { @Override public void safeProcessResult(final int rc, String path, Object ctx, String name) { synchronized (syncObj) { syncObj.rc = rc; syncObj.path = path; syncObj.called = true; syncObj.notify(); } } }, syncObj ); synchronized (syncObj) { while (!syncObj.called) { syncObj.wait(); } } if (Code.OK.intValue() != syncObj.rc) { throw KeeperException.create(syncObj.rc, syncObj.path); } } public static void createFullPathOptimistic(final ZooKeeper zk, final String originalPath, final byte[] data, final List acl, final CreateMode createMode, final AsyncCallback.StringCallback callback, final Object ctx) { zk.create(originalPath, data, acl, createMode, new SafeAsyncZKCallback.StringCallback() { @Override public void safeProcessResult(int rc, String path, Object ctx, String name) { if (rc != Code.NONODE.intValue()) { callback.processResult(rc, path, ctx, name); return; } // Since I got a nonode, it means that my parents don't exist // create mode is persistent since ephemeral nodes can't be // parents ZkUtils.createFullPathOptimistic(zk, PathUtils.parent(originalPath), new byte[0], acl, CreateMode.PERSISTENT, new SafeAsyncZKCallback.StringCallback() { @Override 
public void safeProcessResult(int rc, String path, Object ctx, String name) { if (rc == Code.OK.intValue() || rc == Code.NODEEXISTS.intValue()) { // succeeded in creating the parent, now // create the original path ZkUtils.createFullPathOptimistic(zk, originalPath, data, acl, createMode, callback, ctx); } else { callback.processResult(rc, path, ctx, name); } } }, ctx); } }, ctx); } public static KeeperException logErrorAndCreateZKException(String msg, String path, int rc) { KeeperException ke = KeeperException.create(Code.get(rc), path); logger.error(msg + ",zkPath: " + path, ke); return ke; } } bookkeeper-release-4.2.4/hedwig-server/src/main/resources/000077500000000000000000000000001244507361200235715ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/main/resources/LICENSE.bin.txt000066400000000000000000000372731244507361200261770ustar00rootroot00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. 
For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. 
You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) 
The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ------------------------------------------------------------------------------------ For lib/slf4j-*.jar Copyright (c) 2004-2011 QOS.ch All rights reserved. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ------------------------------------------------------------------------------------ For lib/protobuf-java-*.jar Copyright 2008, Google Inc. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of Google Inc. nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. Code generated by the Protocol Buffer compiler is owned by the owner of the input file used when generating it. This code is not standalone and requires a support library to be linked with it. This support library is itself covered by the above license. ------------------------------------------------------------------------------------ For lib/jline-*.jar Copyright (c) 2002-2006, Marc Prud'hommeaux All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. Neither the name of JLine nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. bookkeeper-release-4.2.4/hedwig-server/src/main/resources/NOTICE.bin.txt000066400000000000000000000035031244507361200260630ustar00rootroot00000000000000Apache BookKeeper Copyright 2011-2014 The Apache Software Foundation Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at: http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
This project includes:
  Apache Log4j under The Apache Software License, Version 2.0
  BookKeeper Server under The Apache Software License, Version 2.0
  Commons BeanUtils Core under The Apache Software License, Version 2.0
  Commons CLI under The Apache Software License, Version 2.0
  Commons Codec under The Apache Software License, Version 2.0
  Commons Collections under The Apache Software License, Version 2.0
  Commons Configuration under The Apache Software License, Version 2.0
  Commons IO under The Apache Software License, Version 2.0
  Commons Lang under The Apache Software License, Version 2.0
  Commons Logging under The Apache Software License, Version 2.0
  commons-beanutils under Apache License, Version 2.0
  Derby Engine under Apache License, Version 2.0
  Digester under The Apache Software License, Version 2.0
  Hedwig Client under The Apache Software License, Version 2.0
  Hedwig Protocol under The Apache Software License, Version 2.0
  JLine under BSD
  Protocol Buffer Java API under New BSD license
  SLF4J API Module under MIT License
  SLF4J LOG4J-12 Binding under MIT License
  The Netty Project under Apache License, Version 2.0
  ZooKeeper under Apache License, Version 2.0
  Guava under The Apache Software License, Version 2.0
bookkeeper-release-4.2.4/hedwig-server/src/main/resources/findbugsExclude.xml000066400000000000000000000021321244507361200274240ustar00rootroot00000000000000
bookkeeper-release-4.2.4/hedwig-server/src/main/resources/p12.pass000066400000000000000000000000161244507361200250600ustar00rootroot00000000000000eUySvp2phM2Wk
bookkeeper-release-4.2.4/hedwig-server/src/main/resources/server.p12000066400000000000000000000075251244507361200254300ustar00rootroot00000000000000
[binary PKCS#12 keystore contents omitted]
bookkeeper-release-4.2.4/hedwig-server/src/test/000077500000000000000000000000001244507361200216125ustar00rootroot00000000000000
bookkeeper-release-4.2.4/hedwig-server/src/test/java/000077500000000000000000000000001244507361200225335ustar00rootroot00000000000000
bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/000077500000000000000000000000001244507361200233225ustar00rootroot00000000000000
bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/000077500000000000000000000000001244507361200245435ustar00rootroot00000000000000
bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/000077500000000000000000000000001244507361200260125ustar00rootroot00000000000000
bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/HelperMethods.java000066400000000000000000000043221244507361200314210ustar00rootroot00000000000000/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.hedwig;

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

import com.google.protobuf.ByteString;

import org.apache.hedwig.protocol.PubSubProtocol.Message;

public class HelperMethods {
    static Random rand = new Random();

    public static List<Message> getRandomPublishedMessages(int numMessages, int size) {
        ByteString[] regions = { ByteString.copyFromUtf8("sp1"), ByteString.copyFromUtf8("re1"),
                                 ByteString.copyFromUtf8("sg") };
        return getRandomPublishedMessages(numMessages, size, regions);
    }

    public static List<Message> getRandomPublishedMessages(int numMessages, int size, ByteString[] regions) {
        List<Message> msgs = new ArrayList<Message>();
        for (int i = 0; i < numMessages; i++) {
            byte[] body = new byte[size];
            rand.nextBytes(body);
            msgs.add(Message.newBuilder().setBody(ByteString.copyFrom(body)).setSrcRegion(
                         regions[rand.nextInt(regions.length)]).build());
        }
        return msgs;
    }

    public static boolean areEqual(Message m1, Message m2) {
        if (m1.hasSrcRegion() != m2.hasSrcRegion()) {
            return false;
        }
        if (m1.hasSrcRegion() && !m1.getSrcRegion().equals(m2.getSrcRegion())) {
            return false;
        }
        return m1.getBody().equals(m2.getBody());
    }
}
bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/StubCallback.java000066400000000000000000000034461244507361200312140ustar00rootroot00000000000000/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.hedwig;

import java.util.concurrent.SynchronousQueue;

import org.apache.hedwig.exceptions.PubSubException;
import org.apache.hedwig.util.ConcurrencyUtils;
import org.apache.hedwig.util.Either;
import org.apache.hedwig.util.Callback;

public class StubCallback<T> implements Callback<T> {
    public SynchronousQueue<Either<T, PubSubException>> queue =
        new SynchronousQueue<Either<T, PubSubException>>();

    public void operationFailed(Object ctx, final PubSubException exception) {
        new Thread(new Runnable() {
            @Override
            public void run() {
                ConcurrencyUtils.put(queue, Either.of((T) null, exception));
            }
        }).start();
    }

    public void operationFinished(Object ctx, final T resultOfOperation) {
        new Thread(new Runnable() {
            @Override
            public void run() {
                ConcurrencyUtils.put(queue, Either.of(resultOfOperation, (PubSubException) null));
            }
        }).start();
    }
}
bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/StubScanCallback.java000066400000000000000000000032751244507361200320230ustar00rootroot00000000000000/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.hedwig;

import java.util.ArrayList;
import java.util.List;

import org.apache.hedwig.protocol.PubSubProtocol.Message;
import org.apache.hedwig.server.persistence.ScanCallback;

public class StubScanCallback implements ScanCallback {
    List<Message> messages = new ArrayList<Message>();
    boolean success = false, failed = false;

    public void messageScanned(Object ctx, Message message) {
        messages.add(message);
        success = true;
    }

    public void scanFailed(Object ctx, Exception exception) {
        failed = true;
        success = false;
    }

    public void scanFinished(Object ctx, ReasonForFinish reason) {
        success = true;
        failed = false;
    }

    public List<Message> getMessages() {
        return messages;
    }

    public boolean isSuccess() {
        return success;
    }

    public boolean isFailed() {
        return failed;
    }
}
bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/client/000077500000000000000000000000001244507361200272705ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/client/TestPubSubClient.java000066400000000000000000000724201244507361200333370ustar00rootroot00000000000000/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.
The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.client; import java.util.Arrays; import java.util.Collection; import java.util.HashMap; import java.util.Map; import java.util.concurrent.CountDownLatch; import java.util.concurrent.SynchronousQueue; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicInteger; import org.junit.After; import org.junit.Before; import org.junit.Test; import org.junit.runner.RunWith; import org.junit.runners.Parameterized; import org.junit.runners.Parameterized.Parameters; import com.google.protobuf.ByteString; import org.apache.hedwig.client.api.MessageHandler; import org.apache.hedwig.client.conf.ClientConfiguration; import org.apache.hedwig.client.HedwigClient; import org.apache.hedwig.client.api.Publisher; import org.apache.hedwig.client.api.Subscriber; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.exceptions.PubSubException.ClientNotSubscribedException; import org.apache.hedwig.protocol.PubSubProtocol.Message; import org.apache.hedwig.protocol.PubSubProtocol.MessageSeqId; import org.apache.hedwig.protocol.PubSubProtocol.PublishResponse; import org.apache.hedwig.protocol.PubSubProtocol.SubscribeRequest.CreateOrAttach; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionEvent; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionOptions; import org.apache.hedwig.server.PubSubServerStandAloneTestBase; import org.apache.hedwig.util.Callback; import org.apache.hedwig.util.ConcurrencyUtils; import org.apache.hedwig.util.SubscriptionListener; import org.apache.hedwig.util.HedwigSocketAddress; @RunWith(Parameterized.class) public class TestPubSubClient extends PubSubServerStandAloneTestBase { private static final int RETENTION_SECS_VALUE = 10; // Client side variables protected HedwigClient client; protected Publisher publisher; protected Subscriber subscriber; protected class RetentionServerConfiguration extends StandAloneServerConfiguration { @Override public boolean isStandalone() { return true; } @Override public int getRetentionSecs() { return RETENTION_SECS_VALUE; } } // SynchronousQueues to verify async calls private final SynchronousQueue queue = new SynchronousQueue(); private final SynchronousQueue consumeQueue = new SynchronousQueue(); private final SynchronousQueue eventQueue = new SynchronousQueue(); class TestSubscriptionListener implements SubscriptionListener { SynchronousQueue eventQueue; public TestSubscriptionListener() { this.eventQueue = TestPubSubClient.this.eventQueue; } public TestSubscriptionListener(SynchronousQueue queue) { this.eventQueue = queue; } @Override public void processEvent(final ByteString topic, final ByteString subscriberId, final SubscriptionEvent event) { new Thread(new Runnable() { @Override public void run() { logger.debug("Event {} received for subscription(topic:{}, subscriber:{})", new Object[] { event, topic.toStringUtf8(), subscriberId.toStringUtf8() }); 
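// A SynchronousQueue has no capacity, so the put() below blocks until the
// test thread take()s the matching event; doing the handoff on a short-lived
// thread keeps the client's callback thread from stalling on slow test code.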
ConcurrencyUtils.put(TestSubscriptionListener.this.eventQueue, event); } }).start(); } } // Test implementation of Callback for async client actions. class TestCallback implements Callback { @Override public void operationFinished(Object ctx, Void resultOfOperation) { new Thread(new Runnable() { @Override public void run() { if (logger.isDebugEnabled()) logger.debug("Operation finished!"); ConcurrencyUtils.put(queue, true); } }).start(); } @Override public void operationFailed(Object ctx, final PubSubException exception) { new Thread(new Runnable() { @Override public void run() { logger.error("Operation failed!", exception); ConcurrencyUtils.put(queue, false); } }).start(); } } // Test implementation of subscriber's message handler. class TestMessageHandler implements MessageHandler { private final SynchronousQueue consumeQueue; public TestMessageHandler() { this.consumeQueue = TestPubSubClient.this.consumeQueue; } public TestMessageHandler(SynchronousQueue consumeQueue) { this.consumeQueue = consumeQueue; } public void deliver(ByteString topic, ByteString subscriberId, Message msg, Callback callback, Object context) { new Thread(new Runnable() { @Override public void run() { if (logger.isDebugEnabled()) logger.debug("Consume operation finished successfully!"); ConcurrencyUtils.put(TestMessageHandler.this.consumeQueue, true); } }).start(); callback.operationFinished(context, null); } } @Parameters public static Collection configs() { return Arrays.asList(new Object[][] { { true }, { false } }); } protected boolean isSubscriptionChannelSharingEnabled; public TestPubSubClient(boolean isSubscriptionChannelSharingEnabled) { this.isSubscriptionChannelSharingEnabled = isSubscriptionChannelSharingEnabled; } @Override @Before public void setUp() throws Exception { super.setUp(); client = new HedwigClient(new ClientConfiguration() { @Override public HedwigSocketAddress getDefaultServerHedwigSocketAddress() { return getDefaultHedwigAddress(); } @Override public boolean isSubscriptionChannelSharingEnabled() { return TestPubSubClient.this.isSubscriptionChannelSharingEnabled; } }); publisher = client.getPublisher(); subscriber = client.getSubscriber(); } @Override @After public void tearDown() throws Exception { client.close(); super.tearDown(); } @Test(timeout=60000) public void testSyncPublish() throws Exception { boolean publishSuccess = true; try { publisher.publish(ByteString.copyFromUtf8("mySyncTopic"), Message.newBuilder().setBody( ByteString.copyFromUtf8("Hello Sync World!")).build()); } catch (Exception e) { publishSuccess = false; } assertTrue(publishSuccess); } @Test(timeout=60000) public void testSyncPublishWithResponse() throws Exception { ByteString topic = ByteString.copyFromUtf8("testSyncPublishWithResponse"); ByteString subid = ByteString.copyFromUtf8("mysubid"); final String prefix = "SyncMessage-"; final int numMessages = 30; final Map publishedMsgs = new HashMap(); final AtomicInteger numReceived = new AtomicInteger(0); final CountDownLatch receiveLatch = new CountDownLatch(1); final Map receivedMsgs = new HashMap(); subscriber.subscribe(topic, subid, CreateOrAttach.CREATE_OR_ATTACH); subscriber.startDelivery(topic, subid, new MessageHandler() { synchronized public void deliver(ByteString topic, ByteString subscriberId, Message msg, Callback callback, Object context) { String str = msg.getBody().toStringUtf8(); receivedMsgs.put(str, msg.getMsgId()); if (numMessages == numReceived.incrementAndGet()) { receiveLatch.countDown(); } callback.operationFinished(context, null); } }); for 
(int i=0; i publishedMsgs = new HashMap(); final AtomicInteger numReceived = new AtomicInteger(0); final CountDownLatch receiveLatch = new CountDownLatch(1); final Map receivedMsgs = new HashMap(); subscriber.subscribe(topic, subid, CreateOrAttach.CREATE_OR_ATTACH); subscriber.startDelivery(topic, subid, new MessageHandler() { synchronized public void deliver(ByteString topic, ByteString subscriberId, Message msg, Callback callback, Object context) { String str = msg.getBody().toStringUtf8(); receivedMsgs.put(str, msg.getMsgId()); if (numMessages == numReceived.incrementAndGet()) { receiveLatch.countDown(); } callback.operationFinished(context, null); } }); for (int i=0; i() { @Override public void operationFinished(Object ctx, PublishResponse response) { publishedMsgs.put(str, response.getPublishedMsgId()); if (numMessages == numPublished.incrementAndGet()) { publishLatch.countDown(); } } @Override public void operationFailed(Object ctx, final PubSubException exception) { publishLatch.countDown(); } }, null); } assertTrue("Timed out waiting on callback for publish requests.", publishLatch.await(10, TimeUnit.SECONDS)); assertEquals("Should be expected " + numMessages + " publishes.", numMessages, numPublished.get()); assertEquals("Should be expected " + numMessages + " publishe responses.", numMessages, publishedMsgs.size()); assertTrue("Timed out waiting on callback for messages.", receiveLatch.await(30, TimeUnit.SECONDS)); assertEquals("Should be expected " + numMessages + " messages.", numMessages, numReceived.get()); assertEquals("Should be expected " + numMessages + " messages in map.", numMessages, receivedMsgs.size()); for (int i=0; i eventQueue2 = new SynchronousQueue(); subscriber2.addSubscriptionListener(new TestSubscriptionListener(eventQueue2)); try { subscriber2.subscribe(topic, subscriberId, options); } catch (PubSubException.ServiceDownException e) { fail("Should not reach here!"); } SynchronousQueue consumeQueue2 = new SynchronousQueue(); subscriber2.startDelivery(topic, subscriberId, new TestMessageHandler(consumeQueue2)); assertEquals(SubscriptionEvent.SUBSCRIPTION_FORCED_CLOSED, eventQueue.take()); assertTrue(eventQueue2.isEmpty()); // Now publish some messages for the topic to be consumed by the // subscriber. 
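// Only the second subscriber's handler should see them: the first
// subscription was force-closed above, so consumeQueue must stay empty
// while consumeQueue2 receives each message.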
publisher.asyncPublish(topic, Message.newBuilder().setBody(ByteString.copyFromUtf8("Message #1")).build(), new TestCallback(), null); assertTrue(queue.take()); assertTrue(consumeQueue2.take()); assertTrue(consumeQueue.isEmpty()); publisher2.asyncPublish(topic, Message.newBuilder().setBody(ByteString.copyFromUtf8("Message #2")).build(), new TestCallback(), null); assertTrue(queue.take()); assertTrue(consumeQueue2.take()); assertTrue(consumeQueue.isEmpty()); client2.close(); } @Test(timeout=60000) public void testSyncSubscribeWithListenerWhenReleasingTopic() throws Exception { client.close(); tearDownHubServer(); startHubServer(new RetentionServerConfiguration()); client = new HedwigClient(new ClientConfiguration() { @Override public HedwigSocketAddress getDefaultServerHedwigSocketAddress() { return getDefaultHedwigAddress(); } @Override public boolean isSubscriptionChannelSharingEnabled() { return TestPubSubClient.this.isSubscriptionChannelSharingEnabled; } }); publisher = client.getPublisher(); subscriber = client.getSubscriber(); ByteString topic = ByteString.copyFromUtf8("mySyncSubscribeWithListenerWhenReleasingTopic"); ByteString subscriberId = ByteString.copyFromUtf8("mysub"); subscriber.addSubscriptionListener(new TestSubscriptionListener()); SubscriptionOptions options = SubscriptionOptions.newBuilder() .setCreateOrAttach(CreateOrAttach.CREATE_OR_ATTACH) .setForceAttach(false).setEnableResubscribe(false).build(); try { subscriber.subscribe(topic, subscriberId, options); } catch (PubSubException.ServiceDownException e) { fail("Should not reach here!"); } subscriber.startDelivery(topic, subscriberId, new TestMessageHandler()); publisher.asyncPublish(topic, Message.newBuilder().setBody(ByteString.copyFromUtf8("Message #1")).build(), new TestCallback(), null); assertTrue(queue.take()); assertTrue(consumeQueue.take()); Thread.sleep(RETENTION_SECS_VALUE * 2); assertEquals(SubscriptionEvent.TOPIC_MOVED, eventQueue.take()); } @Test public void testCloseSubscribeDuringResubscribe() throws Exception { client.close(); final long reconnectWaitTime = 2000L; client = new HedwigClient(new ClientConfiguration() { @Override public HedwigSocketAddress getDefaultServerHedwigSocketAddress() { return getDefaultHedwigAddress(); } @Override public boolean isSubscriptionChannelSharingEnabled() { return TestPubSubClient.this.isSubscriptionChannelSharingEnabled; } @Override public long getSubscribeReconnectRetryWaitTime() { return reconnectWaitTime; } }); publisher = client.getPublisher(); subscriber = client.getSubscriber(); ByteString topic = ByteString.copyFromUtf8("testCloseSubscribeDuringResubscribe"); ByteString subscriberId = ByteString.copyFromUtf8("mysub"); subscriber.addSubscriptionListener(new TestSubscriptionListener()); SubscriptionOptions options = SubscriptionOptions.newBuilder() .setCreateOrAttach(CreateOrAttach.CREATE_OR_ATTACH) .setForceAttach(false).setEnableResubscribe(true).build(); subscriber.subscribe(topic, subscriberId, options); logger.info("Subscribed topic {}, subscriber {}.", topic.toStringUtf8(), subscriberId.toStringUtf8()); subscriber.startDelivery(topic, subscriberId, new TestMessageHandler()); // tear down the hub server to let subscribe enter tearDownHubServer(); logger.info("Tear down the hub server"); // wait for client enter to resubscribe logic Thread.sleep(reconnectWaitTime / 2); // close sub subscriber.closeSubscription(topic, subscriberId); // start the hub server again startHubServer(conf); // publish a new message publisher.asyncPublish(topic, 
Message.newBuilder().setBody(ByteString.copyFromUtf8("Message #1")).build(), new TestCallback(), null); assertTrue(queue.take()); // wait for another reconnect time period assertNull("Should not receive any messages since the subscription has already been closed.", consumeQueue.poll(reconnectWaitTime + reconnectWaitTime / 2, TimeUnit.MILLISECONDS)); } } TestSubAfterCloseSub.java000066400000000000000000000207261244507361200340760ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/client/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.client; import java.io.IOException; import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; import org.apache.hedwig.client.api.MessageHandler; import org.apache.hedwig.client.api.Publisher; import org.apache.hedwig.client.api.Subscriber; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.protocol.PubSubProtocol.Message; import org.apache.hedwig.protocol.PubSubProtocol.SubscribeRequest.CreateOrAttach; import org.apache.hedwig.server.HedwigHubTestBase; import org.apache.hedwig.server.delivery.DeliveryManager; import org.apache.hedwig.server.delivery.FIFODeliveryManager; import org.apache.hedwig.server.netty.PubSubServer; import org.apache.hedwig.util.Callback; import org.junit.Test; import com.google.protobuf.ByteString; public class TestSubAfterCloseSub extends HedwigHubTestBase { class TestClientConfiguration extends HubClientConfiguration { boolean isSubscriptionChannelSharingEnabled; TestClientConfiguration(boolean isSubscriptionChannelSharingEnabled) { this.isSubscriptionChannelSharingEnabled = isSubscriptionChannelSharingEnabled; } @Override public boolean isSubscriptionChannelSharingEnabled() { return isSubscriptionChannelSharingEnabled; } } private void sleepDeliveryManager(final CountDownLatch wakeupLatch) throws IOException { PubSubServer server = serversList.get(0); assertNotNull("There should be at least one pubsub server", server); DeliveryManager dm = server.getDeliveryManager(); assertNotNull("Delivery manager should not be null once server has started", dm); assertTrue("Delivery manager is wrong type", dm instanceof FIFODeliveryManager); final FIFODeliveryManager fdm = (FIFODeliveryManager)dm; Thread sleeper = new Thread() { @Override public void run() { try { fdm.suspendProcessing(); wakeupLatch.await(); fdm.resumeProcessing(); } catch (Exception e) { logger.error("Error suspending delivery manager", e); } } }; sleeper.start(); } /** * {@link https://issues.apache.org/jira/browse/BOOKKEEPER-507} */ /* TODO: Add this test case back after BOOKKEEPER-37 is fixed @Test(timeout=15000) public void testSubAfterCloseSubForSimpleClient() throws Exception { runSubAfterCloseSubTest(false); } */ /** * {@link 
https://issues.apache.org/jira/browse/BOOKKEEPER-507} */ @Test(timeout=15000) public void testSubAfterCloseSubForMultiplexClient() throws Exception { runSubAfterCloseSubTest(true); } private void runSubAfterCloseSubTest(boolean sharedSubscriptionChannel) throws Exception { HedwigClient client = new HedwigClient(new TestClientConfiguration(sharedSubscriptionChannel)); Publisher publisher = client.getPublisher(); final Subscriber subscriber = client.getSubscriber(); final ByteString topic = ByteString.copyFromUtf8("TestSubAfterCloseSub-" + sharedSubscriptionChannel); final ByteString subid = ByteString.copyFromUtf8("mysub"); final CountDownLatch wakeupLatch = new CountDownLatch(1); final CountDownLatch closeLatch = new CountDownLatch(1); final CountDownLatch subLatch = new CountDownLatch(1); final CountDownLatch deliverLatch = new CountDownLatch(1); try { subscriber.subscribe(topic, subid, CreateOrAttach.CREATE_OR_ATTACH); sleepDeliveryManager(wakeupLatch); subscriber.asyncCloseSubscription(topic, subid, new Callback() { @Override public void operationFinished(Object ctx, Void resultOfOperation) { closeLatch.countDown(); } @Override public void operationFailed(Object ctx, PubSubException exception) { logger.error("Closesub failed : ", exception); } }, null); subscriber.asyncSubscribe(topic, subid, CreateOrAttach.ATTACH, new Callback() { @Override public void operationFinished(Object ctx, Void resultOfOperation) { try { subscriber.startDelivery(topic, subid, new MessageHandler() { @Override public void deliver(ByteString topic, ByteString subid, Message msg, Callback callback, Object context) { deliverLatch.countDown(); } }); } catch (Exception cnse) { logger.error("Failed to start delivery : ", cnse); } subLatch.countDown(); } @Override public void operationFailed(Object ctx, PubSubException exception) { logger.error("Failed to subscriber : ", exception); } }, null); // Make the delivery manager thread sleep for a while. // Before {@link https://issues.apache.org/jira/browse/BOOKKEEPER-507}, // subscribe would succeed before closesub, while closesub would clear // a successful subscription w/o notifying the client. TimeUnit.SECONDS.sleep(2); // wake up fifo delivery thread wakeupLatch.countDown(); // wait close sub to succeed assertTrue("Async close subscription should succeed.", closeLatch.await(5, TimeUnit.SECONDS)); assertTrue("Subscribe should succeed.", subLatch.await(5, TimeUnit.SECONDS)); // publish a message publisher.publish(topic, Message.newBuilder().setBody(topic).build()); // wait for seconds to receive message assertTrue("Message should be received through successful subscription.", deliverLatch.await(5, TimeUnit.SECONDS)); } finally { client.close(); } } /** * Test that if we close a subscription and open again immediately, we don't * get a TOPIC_BUSY. This race existed because the simple client simply closed * the connection when closing a subscription, and another client could try to * attach to the subscription before the channel disconnected event occurs. 
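 * The loop below simply repeats the close/attach sequence several times to
 * widen the window in which the second client can hit that race.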
* * {@link https://issues.apache.org/jira/browse/BOOKKEEPER-513} */ @Test(timeout=15000) public void testSimpleClientDoesntGetTopicBusy() throws Exception { // run ten times to increase chance of hitting race for (int i = 0; i < 10; i++) { HedwigClient client1 = new HedwigClient(new TestClientConfiguration(false)); Subscriber subscriber1 = client1.getSubscriber(); HedwigClient client2 = new HedwigClient(new TestClientConfiguration(false)); Subscriber subscriber2 = client2.getSubscriber(); final ByteString topic = ByteString.copyFromUtf8("TestSimpleClientTopicBusy"); final ByteString subid = ByteString.copyFromUtf8("mysub"); subscriber1.subscribe(topic, subid, CreateOrAttach.CREATE_OR_ATTACH); subscriber1.closeSubscription(topic, subid); subscriber2.subscribe(topic, subid, CreateOrAttach.ATTACH); subscriber2.closeSubscription(topic, subid); client1.close(); client2.close(); } } } bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/client/netty/000077500000000000000000000000001244507361200304335ustar00rootroot00000000000000TestMultiplexing.java000066400000000000000000000427351244507361200345530ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/client/netty/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package org.apache.hedwig.client.netty; import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; import org.junit.After; import org.junit.Before; import org.junit.Test; import com.google.protobuf.ByteString; import org.apache.hedwig.client.api.MessageHandler; import org.apache.hedwig.client.conf.ClientConfiguration; import org.apache.hedwig.client.HedwigClient; import org.apache.hedwig.client.api.Publisher; import org.apache.hedwig.client.api.Subscriber; import org.apache.hedwig.protocol.PubSubProtocol.Message; import org.apache.hedwig.protocol.PubSubProtocol.MessageSeqId; import org.apache.hedwig.protocol.PubSubProtocol.SubscribeRequest.CreateOrAttach; import org.apache.hedwig.server.HedwigHubTestBase; import org.apache.hedwig.server.common.ServerConfiguration; import org.apache.hedwig.util.Callback; public class TestMultiplexing extends HedwigHubTestBase { private static final int DEFAULT_MSG_WINDOW_SIZE = 10; protected class TestServerConfiguration extends HubServerConfiguration { TestServerConfiguration(int serverPort, int sslServerPort) { super(serverPort, sslServerPort); } @Override public int getDefaultMessageWindowSize() { return DEFAULT_MSG_WINDOW_SIZE; } } class TestMessageHandler implements MessageHandler { int expected; final int numMsgsAtFirstRun; final int numMsgsAtSecondRun; final CountDownLatch firstLatch; final CountDownLatch secondLatch; final boolean receiveSecondRun; public TestMessageHandler(int start, int numMsgsAtFirstRun, boolean receiveSecondRun, int numMsgsAtSecondRun) { expected = start; this.numMsgsAtFirstRun = numMsgsAtFirstRun; this.numMsgsAtSecondRun = numMsgsAtSecondRun; this.receiveSecondRun = receiveSecondRun; firstLatch = new CountDownLatch(1); secondLatch = new CountDownLatch(1); } @Override public synchronized void deliver(ByteString topic, ByteString subscriberId, Message msg, Callback callback, Object context) { try { int value = Integer.valueOf(msg.getBody().toStringUtf8()); logger.debug("Received message {}.", value); if (value == expected) { ++expected; } else { // error condition logger.error("Did not receive expected value, expected {}, got {}", expected, value); expected = 0; firstLatch.countDown(); secondLatch.countDown(); } if (numMsgsAtFirstRun + 1 == expected) { firstLatch.countDown(); } if (receiveSecondRun) { if (numMsgsAtFirstRun + numMsgsAtSecondRun + 1 == expected) { secondLatch.countDown(); } } else { if (numMsgsAtFirstRun + 1 < expected) { secondLatch.countDown(); } } callback.operationFinished(context, null); subscriber.consume(topic, subscriberId, msg.getMsgId()); } catch (Throwable t) { logger.error("Received bad message.", t); firstLatch.countDown(); secondLatch.countDown(); } } public void checkFirstRun() throws Exception { assertTrue("Timed out waiting for messages " + (numMsgsAtFirstRun + 1), firstLatch.await(10, TimeUnit.SECONDS)); assertEquals("Should be expected messages with " + (numMsgsAtFirstRun + 1), numMsgsAtFirstRun + 1, expected); } public void checkSecondRun() throws Exception { if (receiveSecondRun) { assertTrue("Timed out waiting for messages " + (numMsgsAtFirstRun + numMsgsAtSecondRun + 1), secondLatch.await(10, TimeUnit.SECONDS)); assertEquals("Should be expected messages with " + (numMsgsAtFirstRun + numMsgsAtSecondRun + 1), numMsgsAtFirstRun + numMsgsAtSecondRun + 1, expected); } else { assertFalse("Receive more messages than " + numMsgsAtFirstRun, secondLatch.await(3, TimeUnit.SECONDS)); assertEquals("Should be expected messages with ony " + (numMsgsAtFirstRun + 1), 
numMsgsAtFirstRun + 1, expected); } } } class ThrottleMessageHandler implements MessageHandler { int expected; final int numMsgs; final int numMsgsThrottle; final CountDownLatch throttleLatch; final CountDownLatch nonThrottleLatch; final boolean enableThrottle; public ThrottleMessageHandler(int start, int numMsgs, boolean enableThrottle, int numMsgsThrottle) { expected = start; this.numMsgs = numMsgs; this.enableThrottle = enableThrottle; this.numMsgsThrottle = numMsgsThrottle; throttleLatch = new CountDownLatch(1); nonThrottleLatch = new CountDownLatch(1); } @Override public synchronized void deliver(ByteString topic, ByteString subscriberId, Message msg, Callback callback, Object context) { try { int value = Integer.valueOf(msg.getBody().toStringUtf8()); logger.debug("Received message {}.", value); if (value == expected) { ++expected; } else { // error condition logger.error("Did not receive expected value, expected {}, got {}", expected, value); expected = 0; throttleLatch.countDown(); nonThrottleLatch.countDown(); } if (expected == numMsgsThrottle + 2) { throttleLatch.countDown(); } if (expected == numMsgs + 1) { nonThrottleLatch.countDown(); } callback.operationFinished(context, null); if (enableThrottle) { if (expected > numMsgsThrottle + 1) { subscriber.consume(topic, subscriberId, msg.getMsgId()); } } else { subscriber.consume(topic, subscriberId, msg.getMsgId()); } } catch (Throwable t) { logger.error("Received bad message.", t); throttleLatch.countDown(); nonThrottleLatch.countDown(); } } public void checkThrottle() throws Exception { if (enableThrottle) { assertFalse("Received more messages than throttle value " + numMsgsThrottle, throttleLatch.await(3, TimeUnit.SECONDS)); assertEquals("Should be expected messages with only " + (numMsgsThrottle + 1), numMsgsThrottle + 1, expected); } else { assertTrue("Should not be throttled.", throttleLatch.await(10, TimeUnit.SECONDS)); assertTrue("Timed out waiting for messages " + (numMsgs + 1), nonThrottleLatch.await(10, TimeUnit.SECONDS)); assertEquals("Should be expected messages with " + (numMsgs + 1), numMsgs + 1, expected); } } public void checkAfterThrottle() throws Exception { if (enableThrottle) { assertTrue("Timed out waiting for messages " + (numMsgs + 1), nonThrottleLatch.await(10, TimeUnit.SECONDS)); assertEquals("Should be expected messages with " + (numMsgs + 1), numMsgs + 1, expected); } } } HedwigClient client; Publisher publisher; Subscriber subscriber; @Override @Before public void setUp() throws Exception { super.setUp(); client = new HedwigClient(new HubClientConfiguration() { @Override public boolean isSubscriptionChannelSharingEnabled() { return true; } @Override public boolean isAutoSendConsumeMessageEnabled() { return false; } }); publisher = client.getPublisher(); subscriber = client.getSubscriber(); } @Override @After public void tearDown() throws Exception { client.close(); super.tearDown(); } @Override protected ServerConfiguration getServerConfiguration(int port, int sslPort) { return new TestServerConfiguration(port, sslPort); } @Test(timeout=60000) public void testStopDelivery() throws Exception { ByteString topic1 = ByteString.copyFromUtf8("testStopDelivery-1"); ByteString topic2 = ByteString.copyFromUtf8("testStopDelivery-2"); ByteString subid1 = ByteString.copyFromUtf8("mysubid-1"); ByteString subid2 = ByteString.copyFromUtf8("mysubid-2"); final int X = 20; TestMessageHandler csHandler11 = new TestMessageHandler(1, X, true, X); TestMessageHandler csHandler12 = new TestMessageHandler(1, X, false, 0); 
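// Handlers constructed with receiveSecondRun=false (csHandler12 above and
// csHandler21 below) must see only the first batch of X messages, since
// delivery on their subscriptions is stopped before the second batch.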
TestMessageHandler csHandler21 = new TestMessageHandler(1, X, false, 0); TestMessageHandler csHandler22 = new TestMessageHandler(1, X, true, X); subscriber.subscribe(topic1, subid1, CreateOrAttach.CREATE); subscriber.subscribe(topic1, subid2, CreateOrAttach.CREATE); subscriber.subscribe(topic2, subid1, CreateOrAttach.CREATE); subscriber.subscribe(topic2, subid2, CreateOrAttach.CREATE); // start deliveries subscriber.startDelivery(topic1, subid1, csHandler11); subscriber.startDelivery(topic1, subid2, csHandler12); subscriber.startDelivery(topic2, subid1, csHandler21); subscriber.startDelivery(topic2, subid2, csHandler22); // first publish for (int i = 1; i<=X; i++) { publisher.publish(topic1, Message.newBuilder().setBody( ByteString.copyFromUtf8(String.valueOf(i))).build()); publisher.publish(topic2, Message.newBuilder().setBody( ByteString.copyFromUtf8(String.valueOf(i))).build()); } csHandler11.checkFirstRun(); csHandler12.checkFirstRun(); csHandler21.checkFirstRun(); csHandler22.checkFirstRun(); // stop delivery for and subscriber.stopDelivery(topic1, subid2); subscriber.stopDelivery(topic2, subid1); // second publish for (int i = X+1; i<=2*X; i++) { publisher.publish(topic1, Message.newBuilder().setBody( ByteString.copyFromUtf8(String.valueOf(i))).build()); publisher.publish(topic2, Message.newBuilder().setBody( ByteString.copyFromUtf8(String.valueOf(i))).build()); } csHandler11.checkSecondRun(); csHandler22.checkSecondRun(); csHandler12.checkSecondRun(); csHandler21.checkSecondRun(); } @Test(timeout=60000) public void testCloseSubscription() throws Exception { ByteString topic1 = ByteString.copyFromUtf8("testCloseSubscription-1"); ByteString topic2 = ByteString.copyFromUtf8("testCloseSubscription-2"); ByteString subid1 = ByteString.copyFromUtf8("mysubid-1"); ByteString subid2 = ByteString.copyFromUtf8("mysubid-2"); final int X = 20; TestMessageHandler csHandler11 = new TestMessageHandler(1, X, true, X); TestMessageHandler csHandler12 = new TestMessageHandler(1, X, false, 0); TestMessageHandler csHandler21 = new TestMessageHandler(1, X, false, 0); TestMessageHandler csHandler22 = new TestMessageHandler(1, X, true, X); subscriber.subscribe(topic1, subid1, CreateOrAttach.CREATE); subscriber.subscribe(topic1, subid2, CreateOrAttach.CREATE); subscriber.subscribe(topic2, subid1, CreateOrAttach.CREATE); subscriber.subscribe(topic2, subid2, CreateOrAttach.CREATE); // start deliveries subscriber.startDelivery(topic1, subid1, csHandler11); subscriber.startDelivery(topic1, subid2, csHandler12); subscriber.startDelivery(topic2, subid1, csHandler21); subscriber.startDelivery(topic2, subid2, csHandler22); // first publish for (int i = 1; i<=X; i++) { publisher.publish(topic1, Message.newBuilder().setBody( ByteString.copyFromUtf8(String.valueOf(i))).build()); publisher.publish(topic2, Message.newBuilder().setBody( ByteString.copyFromUtf8(String.valueOf(i))).build()); } csHandler11.checkFirstRun(); csHandler12.checkFirstRun(); csHandler21.checkFirstRun(); csHandler22.checkFirstRun(); // close subscription for and subscriber.closeSubscription(topic1, subid2); subscriber.closeSubscription(topic2, subid1); // second publish for (int i = X+1; i<=2*X; i++) { publisher.publish(topic1, Message.newBuilder().setBody( ByteString.copyFromUtf8(String.valueOf(i))).build()); publisher.publish(topic2, Message.newBuilder().setBody( ByteString.copyFromUtf8(String.valueOf(i))).build()); } csHandler11.checkSecondRun(); csHandler22.checkSecondRun(); csHandler12.checkSecondRun(); csHandler21.checkSecondRun(); } 
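// The throttle test below relies on setUp() disabling automatic consume
// acknowledgements: once DEFAULT_MSG_WINDOW_SIZE messages are outstanding
// on a subscription, the hub stops delivering until consume() advances the
// message window.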
@Test(timeout=60000) public void testThrottle() throws Exception { ByteString topic1 = ByteString.copyFromUtf8("testThrottle-1"); ByteString topic2 = ByteString.copyFromUtf8("testThrottle-2"); ByteString subid1 = ByteString.copyFromUtf8("mysubid-1"); ByteString subid2 = ByteString.copyFromUtf8("mysubid-2"); final int X = DEFAULT_MSG_WINDOW_SIZE; ThrottleMessageHandler csHandler11 = new ThrottleMessageHandler(1, 3*X, false, X); ThrottleMessageHandler csHandler12 = new ThrottleMessageHandler(1, 3*X, true, X); ThrottleMessageHandler csHandler21 = new ThrottleMessageHandler(1, 3*X, true, X); ThrottleMessageHandler csHandler22 = new ThrottleMessageHandler(1, 3*X, false, X); subscriber.subscribe(topic1, subid1, CreateOrAttach.CREATE); subscriber.subscribe(topic1, subid2, CreateOrAttach.CREATE); subscriber.subscribe(topic2, subid1, CreateOrAttach.CREATE); subscriber.subscribe(topic2, subid2, CreateOrAttach.CREATE); // start deliveries subscriber.startDelivery(topic1, subid1, csHandler11); subscriber.startDelivery(topic1, subid2, csHandler12); subscriber.startDelivery(topic2, subid1, csHandler21); subscriber.startDelivery(topic2, subid2, csHandler22); // publish for (int i = 1; i<=3*X; i++) { publisher.publish(topic1, Message.newBuilder().setBody( ByteString.copyFromUtf8(String.valueOf(i))).build()); publisher.publish(topic2, Message.newBuilder().setBody( ByteString.copyFromUtf8(String.valueOf(i))).build()); } csHandler11.checkThrottle(); csHandler12.checkThrottle(); csHandler21.checkThrottle(); csHandler22.checkThrottle(); // consume messages to not throttle them for (int i=1; i<=X; i++) { MessageSeqId seqId = MessageSeqId.newBuilder().setLocalComponent(i).build(); subscriber.consume(topic1, subid2, seqId); subscriber.consume(topic2, subid1, seqId); } csHandler11.checkAfterThrottle(); csHandler22.checkAfterThrottle(); csHandler12.checkAfterThrottle(); csHandler21.checkAfterThrottle(); } } bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/000077500000000000000000000000001244507361200273205ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/HedwigHubTestBase.java000066400000000000000000000131271244507361200334700ustar00rootroot00000000000000/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package org.apache.hedwig.server; import java.util.LinkedList; import java.util.List; import junit.framework.TestCase; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.junit.After; import org.junit.Before; import org.apache.hedwig.client.conf.ClientConfiguration; import org.apache.hedwig.server.common.ServerConfiguration; import org.apache.hedwig.server.netty.PubSubServer; import org.apache.hedwig.server.persistence.BookKeeperTestBase; import org.apache.hedwig.util.HedwigSocketAddress; import org.apache.bookkeeper.test.PortManager; /** * This is a base class for any tests that need a Hedwig Hub(s) setup with an * associated BookKeeper and ZooKeeper instance. * */ public abstract class HedwigHubTestBase extends TestCase { protected static Logger logger = LoggerFactory.getLogger(HedwigHubTestBase.class); // BookKeeper variables // Default number of bookie servers to setup. Extending classes can // override this. protected int numBookies = 3; protected long readDelay = 0L; protected BookKeeperTestBase bktb; // PubSubServer variables // Default number of PubSubServer hubs to setup. Extending classes can // override this. protected final int numServers; protected List serversList; protected List serverAddresses; public HedwigHubTestBase() { this(1); } protected HedwigHubTestBase(int numServers) { this.numServers = numServers; serverAddresses = new LinkedList(); for (int i = 0; i < numServers; i++) { serverAddresses.add(new HedwigSocketAddress("localhost", PortManager.nextFreePort(), PortManager.nextFreePort())); } } // Default child class of the ServerConfiguration to be used here. // Extending classes can define their own (possibly extending from this) and // override the getServerConfiguration method below to return their own // configuration. protected class HubServerConfiguration extends ServerConfiguration { private final int serverPort, sslServerPort; public HubServerConfiguration(int serverPort, int sslServerPort) { this.serverPort = serverPort; this.sslServerPort = sslServerPort; } @Override public int getServerPort() { return serverPort; } @Override public int getSSLServerPort() { return sslServerPort; } @Override public String getZkHost() { return bktb.getZkHostPort(); } @Override public boolean isSSLEnabled() { return true; } @Override public String getCertName() { return "/server.p12"; } @Override public String getPassword() { return "eUySvp2phM2Wk"; } } public class HubClientConfiguration extends ClientConfiguration { @Override public HedwigSocketAddress getDefaultServerHedwigSocketAddress() { return serverAddresses.get(0); } } // Method to get a ServerConfiguration for the PubSubServers created using // the specified ports. Extending child classes can override this. This // default implementation will return the HubServerConfiguration object // defined above. 
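// Tests such as TestMultiplexing override this hook to tune server settings
// (for example the default message window size) without changing the harness.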
protected ServerConfiguration getServerConfiguration(int serverPort, int sslServerPort) { return new HubServerConfiguration(serverPort, sslServerPort); } protected void startHubServers() throws Exception { // Now create the PubSubServer Hubs serversList = new LinkedList(); for (int i = 0; i < numServers; i++) { ServerConfiguration conf = getServerConfiguration(serverAddresses.get(i).getPort(), serverAddresses.get(i).getSSLPort()); PubSubServer s = new PubSubServer(conf, new ClientConfiguration(), new LoggingExceptionHandler()); serversList.add(s); s.start(); } } protected void stopHubServers() throws Exception { // Shutdown all of the PubSubServers for (PubSubServer server : serversList) { server.shutdown(); } serversList.clear(); } @Override @Before public void setUp() throws Exception { logger.info("STARTING " + getName()); bktb = new BookKeeperTestBase(numBookies, readDelay); bktb.setUp(); startHubServers(); logger.info("HedwigHub test setup finished"); } @Override @After public void tearDown() throws Exception { logger.info("tearDown starting"); stopHubServers(); bktb.tearDown(); logger.info("FINISHED " + getName()); } } HedwigRegionTestBase.java000066400000000000000000000260441244507361200341200ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server; import java.util.HashMap; import java.util.LinkedList; import java.util.List; import java.util.Map; import junit.framework.TestCase; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.junit.After; import org.junit.Before; import org.apache.hedwig.client.conf.ClientConfiguration; import org.apache.hedwig.client.HedwigClient; import org.apache.hedwig.server.common.ServerConfiguration; import org.apache.hedwig.server.netty.PubSubServer; import org.apache.hedwig.server.persistence.BookKeeperTestBase; import org.apache.hedwig.util.HedwigSocketAddress; import org.apache.hedwig.server.LoggingExceptionHandler; import org.apache.bookkeeper.test.PortManager; /** * This is a base class for any tests that need a Hedwig Region(s) setup with a * number of Hedwig hubs per region, an associated HedwigClient per region and * the required BookKeeper and ZooKeeper instances. * */ public abstract class HedwigRegionTestBase extends TestCase { protected static Logger logger = LoggerFactory.getLogger(HedwigRegionTestBase.class); // BookKeeper variables // Default number of bookie servers to setup. Extending classes // can override this. We should be able to reuse the same BookKeeper // ensemble among all of the regions, at least for unit testing purposes. protected int numBookies = 3; protected BookKeeperTestBase bktb; // Hedwig Region variables // Default number of Hedwig Regions to setup. 
Extending classes can // override this. protected int numRegions = 2; protected int numServersPerRegion = 1; // Map with keys being Region names and values being the list of Hedwig // Hubs (PubSubServers) for that particular region. protected Map> regionServersMap; // Map with keys being Region names and values being the Hedwig Client // instance. protected Map regionClientsMap; protected Map regionNameToIndexMap; protected Map> regionHubAddresses; // String constant used as the prefix for the region names. protected static final String REGION_PREFIX = "region"; // Default child class of the ServerConfiguration to be used here. // Extending classes can define their own (possibly extending from this) and // override the getServerConfiguration method below to return their own // configuration. protected class RegionServerConfiguration extends ServerConfiguration { private final int serverPort, sslServerPort; private final String regionName; public RegionServerConfiguration(int serverPort, int sslServerPort, String regionName) { this.serverPort = serverPort; this.sslServerPort = sslServerPort; this.regionName = regionName; conf.setProperty(REGION, regionName); setRegionList(); } protected void setRegionList() { List myRegionList = new LinkedList(); for (int i = 0; i < numRegions; i++) { int curDefaultServerPort = regionHubAddresses.get(i).get(0).getPort(); int curDefaultSSLServerPort = regionHubAddresses.get(i).get(0).getSSLPort(); // Add this region default server port if it is for a region // other than its own. if (regionNameToIndexMap.get(regionName) != i) { myRegionList.add("localhost:" + curDefaultServerPort + ":" + curDefaultSSLServerPort); } } regionList = myRegionList; } @Override public int getServerPort() { return serverPort; } @Override public int getSSLServerPort() { return sslServerPort; } @Override public String getZkHost() { return bktb.getZkHostPort(); } @Override public String getMyRegion() { return regionName; } @Override public boolean isSSLEnabled() { return true; } @Override public boolean isInterRegionSSLEnabled() { return true; } @Override public String getCertName() { return "/server.p12"; } @Override public String getPassword() { return "eUySvp2phM2Wk"; } } // Method to get a ServerConfiguration for the PubSubServers created using // the specified ports and region name. Extending child classes can override // this. This default implementation will return the // RegionServerConfiguration object defined above. protected ServerConfiguration getServerConfiguration(int serverPort, int sslServerPort, String regionName) { return new RegionServerConfiguration(serverPort, sslServerPort, regionName); } // Default ClientConfiguration to use. This just points to the first // Hedwig hub server in each region as the "default server host" to connect // to. protected class RegionClientConfiguration extends ClientConfiguration { public RegionClientConfiguration(int serverPort, int sslServerPort) { myDefaultServerAddress = new HedwigSocketAddress("localhost:" + serverPort + ":" + sslServerPort); } // Below you can override any of the default ClientConfiguration // parameters if needed. } // Method to get a ClientConfiguration for the HedwigClients created. // Inputs are the default Hedwig hub server's ports to point to. protected ClientConfiguration getClientConfiguration(int serverPort, int sslServerPort) { return new RegionClientConfiguration(serverPort, sslServerPort); } // Method to get a ClientConfiguration for the Cross Region Hedwig Client. 
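// That configuration is handed to each PubSubServer so every hub can run its
// own client for cross-region delivery; it points at the first hub of region 0.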
protected ClientConfiguration getRegionClientConfiguration() { return new ClientConfiguration() { @Override public HedwigSocketAddress getDefaultServerHedwigSocketAddress() { return regionHubAddresses.get(0).get(0); } }; } @Override @Before public void setUp() throws Exception { logger.info("STARTING " + getName()); bktb = new BookKeeperTestBase(numBookies); bktb.setUp(); // Create the Hedwig PubSubServer Hubs for all of the regions regionServersMap = new HashMap>(numRegions, 1.0f); regionClientsMap = new HashMap(numRegions, 1.0f); regionHubAddresses = new HashMap>(numRegions, 1.0f); for (int i = 0; i < numRegions; i++) { List addresses = new LinkedList(); for (int j = 0; j < numServersPerRegion; j++) { HedwigSocketAddress a = new HedwigSocketAddress("localhost", PortManager.nextFreePort(), PortManager.nextFreePort()); addresses.add(a); } regionHubAddresses.put(i, addresses); } regionNameToIndexMap = new HashMap(); for (int i = 0; i < numRegions; i++) { startRegion(i); } logger.info("HedwigRegion test setup finished"); } @Override @After public void tearDown() throws Exception { logger.info("tearDown starting"); // Stop all of the HedwigClients for all regions for (HedwigClient client : regionClientsMap.values()) { client.close(); } regionClientsMap.clear(); // Shutdown all of the PubSubServers in all regions for (List serversList : regionServersMap.values()) { for (PubSubServer server : serversList) { server.shutdown(); } } logger.info("Finished shutting down all of the hub servers!"); regionServersMap.clear(); // Shutdown the BookKeeper and ZooKeeper stuff bktb.tearDown(); logger.info("FINISHED " + getName()); } protected void stopRegion(int regionIdx) throws Exception { String regionName = REGION_PREFIX + regionIdx; if (logger.isDebugEnabled()) { logger.debug("Stop region : " + regionName); } HedwigClient regionClient = regionClientsMap.remove(regionName); if (null != regionClient) { regionClient.close(); } List serversList = regionServersMap.remove(regionName); if (null == serversList) { return; } for (PubSubServer server : serversList) { server.shutdown(); } logger.info("Finished shutting down all of the hub servers in region " + regionName); } protected void startRegion(int i) throws Exception { String regionName = REGION_PREFIX + i; regionNameToIndexMap.put(regionName, i); if (logger.isDebugEnabled()) { logger.debug("Start region : " + regionName); } List serversList = new LinkedList(); // For the current region, create the necessary amount of hub // servers. We will basically increment through the port numbers // starting from the initial ones defined. for (int j = 0; j < numServersPerRegion; j++) { HedwigSocketAddress a = regionHubAddresses.get(i).get(j); PubSubServer s = new PubSubServer( getServerConfiguration(a.getPort(), a.getSSLPort(), regionName), getRegionClientConfiguration(), new LoggingExceptionHandler()); serversList.add(s); s.start(); } // Store this list of servers created for the current region regionServersMap.put(regionName, serversList); // Create a Hedwig Client that points to the first Hub server // created in the loop above for the current region. 
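// Its default server address is built from that hub's plain and SSL ports.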
HedwigClient regionClient = new HedwigClient( getClientConfiguration(regionHubAddresses.get(i).get(0).getPort(), regionHubAddresses.get(i).get(0).getSSLPort())); regionClientsMap.put(regionName, regionClient); } } LoggingExceptionHandler.java000066400000000000000000000025261244507361200346540ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Exception handler that simply logs the exception and * does nothing more. To be used in tests instead of TerminateJVMExceptionHandler */ public class LoggingExceptionHandler implements Thread.UncaughtExceptionHandler { static Logger logger = LoggerFactory.getLogger(LoggingExceptionHandler.class); @Override public void uncaughtException(Thread t, Throwable e) { logger.error("Uncaught exception in thread " + t.getName(), e); } } PubSubServerStandAloneTestBase.java000066400000000000000000000064051244507361200361040ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server; import junit.framework.TestCase; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.junit.After; import org.junit.Before; import org.apache.bookkeeper.test.PortManager; import org.apache.hedwig.client.conf.ClientConfiguration; import org.apache.hedwig.server.LoggingExceptionHandler; import org.apache.hedwig.server.common.ServerConfiguration; import org.apache.hedwig.server.netty.PubSubServer; import org.apache.hedwig.util.HedwigSocketAddress; /** * This is a base class for any tests that need a StandAlone PubSubServer setup. 
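 * Unlike HedwigHubTestBase, no ZooKeeper or BookKeeper ensemble is started
 * here; setUp() just picks two free ports and boots a single hub.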
 */
public abstract class PubSubServerStandAloneTestBase extends TestCase {

    protected static Logger logger = LoggerFactory.getLogger(PubSubServerStandAloneTestBase.class);

    protected class StandAloneServerConfiguration extends ServerConfiguration {
        final int port = PortManager.nextFreePort();
        final int sslPort = PortManager.nextFreePort();

        @Override
        public boolean isStandalone() {
            return true;
        }

        @Override
        public int getServerPort() {
            return port;
        }

        @Override
        public int getSSLServerPort() {
            return sslPort;
        }
    }

    public ServerConfiguration getStandAloneServerConfiguration() {
        return new StandAloneServerConfiguration();
    }

    protected PubSubServer server;
    protected ServerConfiguration conf;
    protected HedwigSocketAddress defaultAddress;

    @Override
    @Before
    public void setUp() throws Exception {
        logger.info("STARTING " + getName());
        conf = getStandAloneServerConfiguration();
        startHubServer(conf);
        logger.info("Standalone PubSubServer test setup finished");
    }

    @Override
    @After
    public void tearDown() throws Exception {
        logger.info("tearDown starting");
        tearDownHubServer();
        logger.info("FINISHED " + getName());
    }

    protected HedwigSocketAddress getDefaultHedwigAddress() {
        return defaultAddress;
    }

    protected void startHubServer(ServerConfiguration conf) throws Exception {
        defaultAddress = new HedwigSocketAddress("localhost",
                conf.getServerPort(), conf.getSSLServerPort());
        server = new PubSubServer(conf, new ClientConfiguration(),
                                  new LoggingExceptionHandler());
        server.start();
    }

    protected void tearDownHubServer() throws Exception {
        server.shutdown();
    }
}
TestBackwardCompat.java000066400000000000000000001401661244507361200336360ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
*/ package org.apache.hedwig.server; import java.net.InetAddress; import java.io.File; import java.io.IOException; import java.util.LinkedList; import java.util.List; import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicInteger; import com.google.protobuf.ByteString; import junit.framework.TestCase; import org.junit.Test; import static org.junit.Assert.*; import org.apache.bookkeeper.test.ZooKeeperUtil; import org.apache.bookkeeper.test.PortManager; import org.apache.hedwig.util.HedwigSocketAddress; import org.apache.commons.configuration.Configuration; import org.apache.commons.configuration.ConfigurationException; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * Test Backward Compatability between different versions */ public class TestBackwardCompat extends TestCase { private static Logger logger = LoggerFactory.getLogger(TestBackwardCompat.class); static final int CONSUMEINTERVAL = 5; static ZooKeeperUtil zkUtil = new ZooKeeperUtil(); static class BookKeeperCluster400 { int numBookies; List bkConfs; List bks; BookKeeperCluster400(int numBookies) { this.numBookies = numBookies; } public void start() throws Exception { zkUtil.startServer(); bks = new LinkedList(); bkConfs = new LinkedList(); for (int i=0; i bkConfs; List bks; BookKeeperCluster410(int numBookies) { this.numBookies = numBookies; } public void start() throws Exception { zkUtil.startServer(); bks = new LinkedList(); bkConfs = new LinkedList(); for (int i=0; i callback, Object context) { if (!t.equals(topic) || !s.equals(subId)) { return; } int num = Integer.parseInt(msg.getBody().toStringUtf8()); if (num == next) { latch.countDown(); ++next; } callback.operationFinished(context, null); } public boolean await(long timeout, TimeUnit unit) throws InterruptedException { return latch.await(timeout, unit); } } Client410(final String connectString) { conf = new org.apache.hw_v4_1_0.hedwig.client.conf.ClientConfiguration() { @Override public boolean isAutoSendConsumeMessageEnabled() { return true; } @Override public int getConsumedMessagesBufferSize() { return 1; } @Override protected org.apache.hw_v4_1_0.hedwig.util.HedwigSocketAddress getDefaultServerHedwigSocketAddress() { return new org.apache.hw_v4_1_0.hedwig.util.HedwigSocketAddress(connectString); } }; client = new org.apache.hw_v4_1_0.hedwig.client.HedwigClient(conf); publisher = client.getPublisher(); subscriber = client.getSubscriber(); } void close() throws Exception { if (null != client) { client.close(); } } org.apache.hw_v4_1_0.hedwig.protocol.PubSubProtocol.MessageSeqId publish( ByteString topic, ByteString data) throws Exception { org.apache.hw_v4_1_0.hedwig.protocol.PubSubProtocol.Message message = org.apache.hw_v4_1_0.hedwig.protocol.PubSubProtocol.Message.newBuilder() .setBody(data).build(); publisher.publish(topic, message); return null; } void publishInts(ByteString topic, int start, int num) throws Exception { for (int i=0; i callback, Object context) { try { int value = Integer.valueOf(msg.getBody().toStringUtf8()); if (value == expected.get()) { expected.incrementAndGet(); } else { logger.error("Did not receive expected value, expected {}, got {}", expected.get(), value); expected.set(0); latch.countDown(); } if (expected.get() == x) { latch.countDown(); } callback.operationFinished(context, null); } catch (Exception e) { logger.error("Received bad message", e); latch.countDown(); } } }); assertTrue("Timed out waiting for messages Y is " + y + " expected is currently " + 
expected.get(), latch.await(10, TimeUnit.SECONDS)); assertEquals("Should be expected message with " + x, x, expected.get()); subscriber.stopDelivery(topic, subid); subscriber.closeSubscription(topic, subid); Thread.sleep(1000); // give server time to run disconnect logic (BOOKKEEPER-513) } void subscribe(ByteString topic, ByteString subscriberId) throws Exception { org.apache.hw_v4_1_0.hedwig.protocol.PubSubProtocol.SubscriptionOptions options = org.apache.hw_v4_1_0.hedwig.protocol.PubSubProtocol.SubscriptionOptions.newBuilder() .setCreateOrAttach(org.apache.hw_v4_1_0.hedwig.protocol.PubSubProtocol.SubscribeRequest.CreateOrAttach.CREATE_OR_ATTACH).build(); subscribe(topic, subscriberId, options); } void subscribe(ByteString topic, ByteString subscriberId, org.apache.hw_v4_1_0.hedwig.protocol.PubSubProtocol.SubscriptionOptions options) throws Exception { subscriber.subscribe(topic, subscriberId, options); } void closeSubscription(ByteString topic, ByteString subscriberId) throws Exception { subscriber.closeSubscription(topic, subscriberId); Thread.sleep(1000); // give server time to run disconnect logic (BOOKKEEPER-513) } void receiveInts(ByteString topic, ByteString subscriberId, int start, int num) throws Exception { IntMessageHandler msgHandler = new IntMessageHandler(topic, subscriberId, start, num); subscriber.startDelivery(topic, subscriberId, msgHandler); msgHandler.await(num, TimeUnit.SECONDS); subscriber.stopDelivery(topic, subscriberId); } } /** * Current Version */ static class BookKeeperClusterCurrent { int numBookies; List bkConfs; List bks; BookKeeperClusterCurrent(int numBookies) { this.numBookies = numBookies; } public void start() throws Exception { zkUtil.startServer(); bks = new LinkedList(); bkConfs = new LinkedList(); for (int i=0; i callback, Object context) { if (!t.equals(topic) || !s.equals(subId)) { return; } int num = Integer.parseInt(msg.getBody().toStringUtf8()); if (num == next) { latch.countDown(); ++next; } callback.operationFinished(context, null); } public boolean await(long timeout, TimeUnit unit) throws InterruptedException { return latch.await(timeout, unit); } } ClientCurrent(final String connectString) { this(true, connectString); } ClientCurrent(final boolean autoConsumeEnabled, final String connectString) { conf = new org.apache.hedwig.client.conf.ClientConfiguration() { @Override public boolean isAutoSendConsumeMessageEnabled() { return autoConsumeEnabled; } @Override public int getConsumedMessagesBufferSize() { return 1; } @Override protected HedwigSocketAddress getDefaultServerHedwigSocketAddress() { return new HedwigSocketAddress(connectString); } }; client = new org.apache.hedwig.client.HedwigClient(conf); publisher = client.getPublisher(); subscriber = client.getSubscriber(); } void close() throws Exception { if (null != client) { client.close(); } } org.apache.hedwig.protocol.PubSubProtocol.MessageSeqId publish( ByteString topic, ByteString data) throws Exception { org.apache.hedwig.protocol.PubSubProtocol.Message message = org.apache.hedwig.protocol.PubSubProtocol.Message.newBuilder() .setBody(data).build(); org.apache.hedwig.protocol.PubSubProtocol.PublishResponse resp = publisher.publish(topic, message); if (null == resp) { return null; } return resp.getPublishedMsgId(); } void publishInts(ByteString topic, int start, int num) throws Exception { for (int i=0; i callback, Object context) { try { int value = Integer.valueOf(msg.getBody().toStringUtf8()); if (value == expected.get()) { expected.incrementAndGet(); } else { logger.error("Did not 
receive expected value, expected {}, got {}", expected.get(), value); expected.set(0); latch.countDown(); } if (expected.get() == x) { latch.countDown(); } callback.operationFinished(context, null); } catch (Exception e) { logger.error("Received bad message", e); latch.countDown(); } } }); assertTrue("Timed out waiting for messages Y is " + y + " expected is currently " + expected.get(), latch.await(10, TimeUnit.SECONDS)); assertEquals("Should be expected message with " + x, x, expected.get()); subscriber.stopDelivery(topic, subid); subscriber.closeSubscription(topic, subid); } void receiveNumModM(final ByteString topic, final ByteString subid, final int start, final int num, final int M) throws Exception { org.apache.hedwig.filter.ServerMessageFilter filter = new org.apache.hedwig.filter.ServerMessageFilter() { @Override public org.apache.hedwig.filter.ServerMessageFilter initialize(Configuration conf) { // do nothing return this; } @Override public void uninitialize() { // do nothing; } @Override public org.apache.hedwig.filter.MessageFilterBase setSubscriptionPreferences(ByteString topic, ByteString subscriberId, org.apache.hedwig.protocol.PubSubProtocol.SubscriptionPreferences preferences) { // do nothing; return this; } @Override public boolean testMessage(org.apache.hedwig.protocol.PubSubProtocol.Message msg) { int value = Integer.valueOf(msg.getBody().toStringUtf8()); return 0 == value % M; } }; filter.initialize(conf.getConf()); subscriber.subscribe(topic, subid, org.apache.hedwig.protocol.PubSubProtocol.SubscribeRequest.CreateOrAttach.ATTACH); final int base = start + M - start % M; final AtomicInteger expected = new AtomicInteger(base); final CountDownLatch latch = new CountDownLatch(1); subscriber.startDeliveryWithFilter(topic, subid, new org.apache.hedwig.client.api.MessageHandler() { synchronized public void deliver(ByteString topic, ByteString subscriberId, org.apache.hedwig.protocol.PubSubProtocol.Message msg, org.apache.hedwig.util.Callback callback, Object context) { try { int value = Integer.valueOf(msg.getBody().toStringUtf8()); // duplicated messages received, ignore them if (value > start) { if (value == expected.get()) { expected.addAndGet(M); } else { logger.error("Did not receive expected value, expected {}, got {}", expected.get(), value); expected.set(0); latch.countDown(); } if (expected.get() == (base + num * M)) { latch.countDown(); } } callback.operationFinished(context, null); } catch (Exception e) { logger.error("Received bad message", e); latch.countDown(); } } }, (org.apache.hedwig.filter.ClientMessageFilter) filter); assertTrue("Timed out waiting for messages mod " + M + " expected is " + expected.get(), latch.await(10, TimeUnit.SECONDS)); assertEquals("Should be expected message with " + (base + num * M), (base + num*M), expected.get()); subscriber.stopDelivery(topic, subid); filter.uninitialize(); subscriber.closeSubscription(topic, subid); } void subscribe(ByteString topic, ByteString subscriberId) throws Exception { org.apache.hedwig.protocol.PubSubProtocol.SubscriptionOptions options = org.apache.hedwig.protocol.PubSubProtocol.SubscriptionOptions.newBuilder() .setCreateOrAttach(org.apache.hedwig.protocol.PubSubProtocol.SubscribeRequest.CreateOrAttach.CREATE_OR_ATTACH).build(); subscribe(topic, subscriberId, options); } void subscribe(ByteString topic, ByteString subscriberId, org.apache.hedwig.protocol.PubSubProtocol.SubscriptionOptions options) throws Exception { subscriber.subscribe(topic, subscriberId, options); } void closeSubscription(ByteString 
topic, ByteString subscriberId) throws Exception { subscriber.closeSubscription(topic, subscriberId); } void receiveInts(ByteString topic, ByteString subscriberId, int start, int num) throws Exception { IntMessageHandler msgHandler = new IntMessageHandler(topic, subscriberId, start, num); subscriber.startDelivery(topic, subscriberId, msgHandler); msgHandler.await(num, TimeUnit.SECONDS); subscriber.stopDelivery(topic, subscriberId); } // throttle doesn't work talking with 41 server void throttleX41(ByteString topic, ByteString subid, final int X) throws Exception { org.apache.hedwig.protocol.PubSubProtocol.SubscriptionOptions options = org.apache.hedwig.protocol.PubSubProtocol.SubscriptionOptions.newBuilder() .setCreateOrAttach(org.apache.hedwig.protocol.PubSubProtocol.SubscribeRequest.CreateOrAttach.CREATE_OR_ATTACH) .setMessageWindowSize(X) .build(); subscribe(topic, subid, options); closeSubscription(topic, subid); publishInts(topic, 1, 3*X); subscribe(topic, subid); final AtomicInteger expected = new AtomicInteger(1); final CountDownLatch throttleLatch = new CountDownLatch(1); final CountDownLatch nonThrottleLatch = new CountDownLatch(1); subscriber.startDelivery(topic, subid, new org.apache.hedwig.client.api.MessageHandler() { @Override public synchronized void deliver(ByteString topic, ByteString subscriberId, org.apache.hedwig.protocol.PubSubProtocol.Message msg, org.apache.hedwig.util.Callback callback, Object context) { try { int value = Integer.valueOf(msg.getBody().toStringUtf8()); logger.debug("Received message {},", value); if (value == expected.get()) { expected.incrementAndGet(); } else { // error condition logger.error("Did not receive expected value, expected {}, got {}", expected.get(), value); expected.set(0); throttleLatch.countDown(); nonThrottleLatch.countDown(); } if (expected.get() > X+1) { throttleLatch.countDown(); } if (expected.get() == (3 * X + 1)) { nonThrottleLatch.countDown(); } callback.operationFinished(context, null); } catch (Exception e) { logger.error("Received bad message", e); throttleLatch.countDown(); nonThrottleLatch.countDown(); } } }); assertTrue("Should Receive more messages than throttle value " + X, throttleLatch.await(10, TimeUnit.SECONDS)); assertTrue("Timed out waiting for messages " + (3*X + 1), nonThrottleLatch.await(10, TimeUnit.SECONDS)); assertEquals("Should be expected message with " + (3*X + 1), 3*X + 1, expected.get()); subscriber.stopDelivery(topic, subid); closeSubscription(topic, subid); } } /** * Test compatability of message bound between version 4.0.0 and * current version. * * 1) message bound doesn't take effects on 4.0.0 server. 
* 2) message bound take effects on both 4.1.0 and current server */ @Test(timeout=60000) public void testMessageBoundCompat() throws Exception { ByteString topic = ByteString.copyFromUtf8("testMessageBoundCompat"); ByteString subid = ByteString.copyFromUtf8("mysub"); int port = PortManager.nextFreePort(); int sslPort = PortManager.nextFreePort(); // start bookkeeper 400 BookKeeperCluster400 bkc400 = new BookKeeperCluster400(3); bkc400.start(); // start 400 server Server400 s400 = new Server400(zkUtil.getZooKeeperConnectString(), port, sslPort); s400.start(); org.apache.hedwig.protocol.PubSubProtocol.SubscriptionOptions options5cur = org.apache.hedwig.protocol.PubSubProtocol.SubscriptionOptions.newBuilder() .setCreateOrAttach(org.apache.hedwig.protocol.PubSubProtocol.SubscribeRequest.CreateOrAttach.CREATE_OR_ATTACH) .setMessageBound(5).build(); ClientCurrent ccur = new ClientCurrent("localhost:" + port + ":" + sslPort); ccur.subscribe(topic, subid, options5cur); ccur.closeSubscription(topic, subid); ccur.sendXExpectLastY(topic, subid, 50, 50); // stop 400 servers s400.stop(); bkc400.stop(); // start bookkeeper 410 BookKeeperCluster410 bkc410 = new BookKeeperCluster410(3); bkc410.start(); // start 410 server Server410 s410 = new Server410(zkUtil.getZooKeeperConnectString(), port, sslPort); s410.start(); ccur.subscribe(topic, subid, options5cur); ccur.closeSubscription(topic, subid); ccur.sendXExpectLastY(topic, subid, 50, 5); // stop 410 servers s410.stop(); bkc410.stop(); // start bookkeeper current BookKeeperClusterCurrent bkccur = new BookKeeperClusterCurrent(3); bkccur.start(); // start current server ServerCurrent scur = new ServerCurrent(zkUtil.getZooKeeperConnectString(), port, sslPort); scur.start(); ccur.subscribe(topic, subid, options5cur); ccur.closeSubscription(topic, subid); ccur.sendXExpectLastY(topic, subid, 50, 5); // stop current servers scur.stop(); bkccur.stop(); ccur.close(); } /** * Test compatability of publish interface between version 4.1.0 * and current verison. * * 1) 4.1.0 client could talk with current server. 
* 2) current client could talk with 4.1.0 server, * but no message seq id would be returned */ @Test(timeout=60000) public void testPublishCompat410() throws Exception { ByteString topic = ByteString.copyFromUtf8("TestPublishCompat410"); ByteString data = ByteString.copyFromUtf8("testdata"); // start bookkeeper 410 BookKeeperCluster410 bkc410 = new BookKeeperCluster410(3); bkc410.start(); int port = PortManager.nextFreePort(); int sslPort = PortManager.nextFreePort(); // start 410 server Server410 s410 = new Server410(zkUtil.getZooKeeperConnectString(), port, sslPort); s410.start(); ClientCurrent ccur = new ClientCurrent("localhost:"+port+":"+sslPort); Client410 c410 = new Client410("localhost:"+port+":"+sslPort); // client c410 could publish message to 410 server assertNull(c410.publish(topic, data)); // client ccur could publish message to 410 server // but no message seq id would be returned assertNull(ccur.publish(topic, data)); // stop 410 server s410.stop(); // start current server ServerCurrent scur = new ServerCurrent(zkUtil.getZooKeeperConnectString(), port, sslPort); scur.start(); // client c410 could publish message to 410 server // but no message seq id would be returned assertNull(c410.publish(topic, data)); // client ccur could publish message to current server assertNotNull(ccur.publish(topic, data)); ccur.close(); c410.close(); // stop current server scur.stop(); bkc410.stop(); } /** * Test compatability between version 4.1.0 and the current version. * * A current server could read subscription data recorded by 4.1.0 server. */ @Test(timeout=60000) public void testSubscriptionDataCompat410() throws Exception { ByteString topic = ByteString.copyFromUtf8("TestCompat410"); ByteString sub410 = ByteString.copyFromUtf8("sub410"); ByteString subcur = ByteString.copyFromUtf8("subcur"); // start bookkeeper 410 BookKeeperCluster410 bkc410 = new BookKeeperCluster410(3); bkc410.start(); int port = PortManager.nextFreePort(); int sslPort = PortManager.nextFreePort(); // start 410 server Server410 s410 = new Server410(zkUtil.getZooKeeperConnectString(), port, sslPort); s410.start(); Client410 c410 = new Client410("localhost:"+port+":"+sslPort); c410.subscribe(topic, sub410); c410.closeSubscription(topic, sub410); Thread.sleep(1000); // give server time to run disconnect logic (BOOKKEEPER-513) ClientCurrent ccur = new ClientCurrent("localhost:"+port+":"+sslPort); ccur.subscribe(topic, subcur); ccur.closeSubscription(topic, subcur); // publish messages using old client c410.publishInts(topic, 0, 10); // stop 410 server s410.stop(); // start current server ServerCurrent scur = new ServerCurrent(zkUtil.getZooKeeperConnectString(), port, sslPort); scur.start(); c410.subscribe(topic, sub410); c410.receiveInts(topic, sub410, 0, 10); ccur.subscribe(topic, subcur); ccur.receiveInts(topic, subcur, 0, 10); // publish messages using current client ccur.publishInts(topic, 10, 10); c410.receiveInts(topic, sub410, 10, 10); ccur.receiveInts(topic, subcur, 10, 10); // stop current server scur.stop(); c410.close(); ccur.close(); // stop bookkeeper cluster bkc410.stop(); } /** * Test compatability between version 4.1.0 and the current version. * * A 4.1.0 client could not update message bound, while current could do it. 
*/ @Test(timeout=60000) public void testUpdateMessageBoundCompat410() throws Exception { ByteString topic = ByteString.copyFromUtf8("TestUpdateMessageBoundCompat410"); ByteString subid = ByteString.copyFromUtf8("mysub"); // start bookkeeper BookKeeperClusterCurrent bkccur= new BookKeeperClusterCurrent(3); bkccur.start(); int port = PortManager.nextFreePort(); int sslPort = PortManager.nextFreePort(); // start hub server ServerCurrent scur = new ServerCurrent(zkUtil.getZooKeeperConnectString(), port, sslPort); scur.start(); org.apache.hedwig.protocol.PubSubProtocol.SubscriptionOptions options5cur = org.apache.hedwig.protocol.PubSubProtocol.SubscriptionOptions.newBuilder() .setCreateOrAttach(org.apache.hedwig.protocol.PubSubProtocol.SubscribeRequest.CreateOrAttach.CREATE_OR_ATTACH) .setMessageBound(5).build(); org.apache.hw_v4_1_0.hedwig.protocol.PubSubProtocol.SubscriptionOptions options5v410 = org.apache.hw_v4_1_0.hedwig.protocol.PubSubProtocol.SubscriptionOptions.newBuilder() .setCreateOrAttach(org.apache.hw_v4_1_0.hedwig.protocol.PubSubProtocol.SubscribeRequest.CreateOrAttach.CREATE_OR_ATTACH) .setMessageBound(5).build(); org.apache.hw_v4_1_0.hedwig.protocol.PubSubProtocol.SubscriptionOptions options20v410 = org.apache.hw_v4_1_0.hedwig.protocol.PubSubProtocol.SubscriptionOptions.newBuilder() .setCreateOrAttach(org.apache.hw_v4_1_0.hedwig.protocol.PubSubProtocol.SubscribeRequest.CreateOrAttach.CREATE_OR_ATTACH) .setMessageBound(20).build(); Client410 c410 = new Client410("localhost:"+port+":"+sslPort); c410.subscribe(topic, subid, options20v410); c410.closeSubscription(topic, subid); Thread.sleep(1000); // give server time to run disconnect logic (BOOKKEEPER-513) c410.sendXExpectLastY(topic, subid, 50, 20); c410.subscribe(topic, subid, options5v410); c410.closeSubscription(topic, subid); Thread.sleep(1000); // give server time to run disconnect logic (BOOKKEEPER-513) // the message bound isn't updated. c410.sendXExpectLastY(topic, subid, 50, 20); ClientCurrent ccur = new ClientCurrent("localhost:"+port+":"+sslPort); ccur.subscribe(topic, subid, options5cur); ccur.closeSubscription(topic, subid); Thread.sleep(1000); // give server time to run disconnect logic (BOOKKEEPER-513) // the message bound should be updated. c410.sendXExpectLastY(topic, subid, 50, 5); // stop current server scur.stop(); c410.close(); ccur.close(); // stop bookkeeper cluster bkccur.stop(); } /** * Test compatability between version 4.1.0 and the current version. * * A current client running message filter would fail on 4.1.0 hub servers. 
*/ @Test(timeout=60000) public void testClientMessageFilterCompat410() throws Exception { ByteString topic = ByteString.copyFromUtf8("TestUpdateMessageBoundCompat410"); ByteString subid = ByteString.copyFromUtf8("mysub"); // start bookkeeper BookKeeperCluster410 bkc410 = new BookKeeperCluster410(3); bkc410.start(); int port = PortManager.nextFreePort(); int sslPort = PortManager.nextFreePort(); // start hub server 410 Server410 s410 = new Server410(zkUtil.getZooKeeperConnectString(), port, sslPort); s410.start(); ClientCurrent ccur = new ClientCurrent("localhost:"+port+":"+sslPort); ccur.subscribe(topic, subid); ccur.closeSubscription(topic, subid); ccur.publishInts(topic, 0, 100); try { ccur.receiveNumModM(topic, subid, 0, 50, 2); fail("client-side filter could not run on 4.1.0 hub server"); } catch (Exception e) { logger.info("Should fail to run client-side message filter on 4.1.0 hub server.", e); ccur.closeSubscription(topic, subid); } // stop 410 server s410.stop(); // stop bookkeeper cluster bkc410.stop(); } /** * Test compatability between version 4.1.0 and the current version. * * Server side throttling does't work when current client connects to old version * server. */ @Test(timeout=60000) public void testServerSideThrottleCompat410() throws Exception { ByteString topic = ByteString.copyFromUtf8("TestServerSideThrottleCompat410"); ByteString subid = ByteString.copyFromUtf8("mysub"); // start bookkeeper BookKeeperCluster410 bkc410 = new BookKeeperCluster410(3); bkc410.start(); int port = PortManager.nextFreePort(); int sslPort = PortManager.nextFreePort(); // start hub server 410 Server410 s410 = new Server410(zkUtil.getZooKeeperConnectString(), port, sslPort); s410.start(); ClientCurrent ccur = new ClientCurrent(false, "localhost:"+port+":"+sslPort); ccur.throttleX41(topic, subid, 10); ccur.close(); // stop 410 server s410.stop(); // stop bookkeeper cluster bkc410.stop(); } } TestPubSubServerStartup.java000066400000000000000000000122141244507361200347160ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package org.apache.hedwig.server; import java.io.File; import java.io.FileWriter; import java.io.IOException; import java.net.InetSocketAddress; import java.net.MalformedURLException; import junit.framework.Assert; import org.apache.hedwig.client.conf.ClientConfiguration; import org.apache.hedwig.server.LoggingExceptionHandler; import org.apache.bookkeeper.test.PortManager; import org.apache.commons.configuration.ConfigurationException; import org.apache.hedwig.server.common.ServerConfiguration; import org.apache.hedwig.server.netty.PubSubServer; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.apache.zookeeper.CreateMode; import org.apache.zookeeper.ZooDefs; import org.apache.zookeeper.ZooKeeper; import org.apache.zookeeper.server.NIOServerCnxnFactory; import org.apache.zookeeper.server.ZooKeeperServer; import org.apache.zookeeper.test.ClientBase; import org.junit.Test; public class TestPubSubServerStartup { private static Logger logger = LoggerFactory.getLogger(TestPubSubServerStartup.class); /** * Start-up zookeeper + pubsubserver reading from a config URL. Then stop * and cleanup. * * Loop over that. * * If the pubsub server does not wait for its zookeeper client to be * connected, the pubsub server will fail at startup. * */ @Test(timeout=60000) public void testPubSubServerInstantiationWithConfig() throws Exception { for (int i = 0; i < 10; i++) { logger.info("iteration " + i); instantiateAndDestroyPubSubServer(); } } private void instantiateAndDestroyPubSubServer() throws IOException, InterruptedException, ConfigurationException, MalformedURLException, Exception { int zkPort = PortManager.nextFreePort(); int hwPort = PortManager.nextFreePort(); int hwSSLPort = PortManager.nextFreePort(); String hedwigParams = "default_server_host=localhost:" + hwPort + "\n" + "zk_host=localhost:" + zkPort + "\n" + "server_port=" + hwPort + "\n" + "ssl_server_port=" + hwSSLPort + "\n" + "zk_timeout=2000\n"; File hedwigConfigFile = new File(System.getProperty("java.io.tmpdir") + "/hedwig.cfg"); writeStringToFile(hedwigParams, hedwigConfigFile); ClientBase.setupTestEnv(); File zkTmpDir = File.createTempFile("zookeeper", "test"); zkTmpDir.delete(); zkTmpDir.mkdir(); ZooKeeperServer zks = new ZooKeeperServer(zkTmpDir, zkTmpDir, zkPort); NIOServerCnxnFactory serverFactory = new NIOServerCnxnFactory(); serverFactory.configure(new InetSocketAddress(zkPort), 100); serverFactory.startup(zks); boolean b = ClientBase.waitForServerUp("127.0.0.1:" + zkPort, 5000); ServerConfiguration serverConf = new ServerConfiguration(); serverConf.loadConf(hedwigConfigFile.toURI().toURL()); logger.info("Zookeeper server up and running!"); ZooKeeper zkc = new ZooKeeper("127.0.0.1:" + zkPort, 5000, null); // initialize the zk client with (fake) values zkc.create("/ledgers", new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT); zkc.create("/ledgers/available", new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT); zkc.close(); PubSubServer hedwigServer = null; try { logger.info("starting hedwig broker!"); hedwigServer = new PubSubServer(serverConf, new ClientConfiguration(), new LoggingExceptionHandler()); hedwigServer.start(); } catch (Exception e) { e.printStackTrace(); } Assert.assertNotNull("failed to instantiate hedwig pub sub server", hedwigServer); hedwigServer.shutdown(); serverFactory.shutdown(); zks.shutdown(); zkTmpDir.delete(); ClientBase.waitForServerDown("localhost:" + zkPort, 10000); } public static void writeStringToFile(String string, File f) throws IOException { if 
(f.exists()) {
            if (!f.delete()) {
                throw new RuntimeException("cannot create file " + f.getAbsolutePath());
            }
        }
        if (!f.createNewFile()) {
            throw new RuntimeException("cannot create new file " + f.getAbsolutePath());
        }
        FileWriter fw = new FileWriter(f);
        fw.write(string);
        fw.close();
    }
}
bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/delivery/000077500000000000000000000000001244507361200311435ustar00rootroot00000000000000
StubDeliveryManager.java000066400000000000000000000067411244507361200356530ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/delivery/
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.hedwig.server.delivery;

import java.util.LinkedList;
import java.util.Queue;

import com.google.protobuf.ByteString;

import org.apache.hedwig.client.data.TopicSubscriber;
import org.apache.hedwig.filter.ServerMessageFilter;
import org.apache.hedwig.protocol.PubSubProtocol.MessageSeqId;
import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionEvent;
import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionPreferences;
import org.apache.hedwig.util.Callback;

public class StubDeliveryManager implements DeliveryManager {

    public static class StartServingRequest {
        public ByteString topic;
        public ByteString subscriberId;
        public SubscriptionPreferences preferences;
        public MessageSeqId seqIdToStartFrom;
        public DeliveryEndPoint endPoint;
        public ServerMessageFilter filter;

        public StartServingRequest(ByteString topic, ByteString subscriberId,
                                   SubscriptionPreferences preferences,
                                   MessageSeqId seqIdToStartFrom,
                                   DeliveryEndPoint endPoint,
                                   ServerMessageFilter filter) {
            this.topic = topic;
            this.subscriberId = subscriberId;
            this.preferences = preferences;
            this.seqIdToStartFrom = seqIdToStartFrom;
            this.endPoint = endPoint;
            this.filter = filter;
        }
    }

    // holds StartServingRequest and TopicSubscriber entries, in arrival order
    public Queue<Object> lastRequest = new LinkedList<Object>();

    @Override
    public void startServingSubscription(ByteString topic, ByteString subscriberId,
                                         SubscriptionPreferences preferences,
                                         MessageSeqId seqIdToStartFrom,
                                         DeliveryEndPoint endPoint,
                                         ServerMessageFilter filter,
                                         Callback<Void> cb, Object ctx) {
        lastRequest.add(new StartServingRequest(topic, subscriberId, preferences,
                                                seqIdToStartFrom, endPoint, filter));
        cb.operationFinished(ctx, null);
    }

    @Override
    public void stopServingSubscriber(ByteString topic, ByteString subscriberId,
                                      SubscriptionEvent event,
                                      Callback<Void> cb, Object ctx) {
        lastRequest.add(new TopicSubscriber(topic, subscriberId));
        cb.operationFinished(ctx, null);
    }

    @Override
    public void messageConsumed(ByteString topic, ByteString subscriberId,
                                MessageSeqId seqId) {
        // do nothing
    }

    @Override
    public void start() {
    }

    @Override
    public void stop() {
        // do nothing now
    }
}
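// A minimal usage sketch (hypothetical, not part of the original suite) of how
// a test can drive the stub and assert on the recorded requests. The names
// "topic", "subid", "preferences", "endPoint" and "filter" are assumed to be
// set up by the surrounding test:
//
//     StubDeliveryManager dm = new StubDeliveryManager();
//     dm.startServingSubscription(topic, subid, preferences,
//             MessageSeqId.newBuilder().setLocalComponent(1).build(),
//             endPoint, filter,
//             new Callback<Void>() {
//                 @Override
//                 public void operationFinished(Object ctx, Void result) { }
//                 @Override
//                 public void operationFailed(Object ctx,
//                         org.apache.hedwig.exceptions.PubSubException e) { }
//             }, null);
//     StartServingRequest req = (StartServingRequest) dm.lastRequest.poll();
//     assertEquals(topic, req.topic);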
TestFIFODeliveryManager.java000066400000000000000000000301671244507361200363200ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/delivery/
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.hedwig.server.delivery;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertNull;
import static org.junit.Assert.assertTrue;

import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

import org.apache.hedwig.exceptions.PubSubException;
import org.apache.hedwig.filter.PipelineFilter;
import org.apache.hedwig.protocol.PubSubProtocol.Message;
import org.apache.hedwig.protocol.PubSubProtocol.MessageSeqId;
import org.apache.hedwig.protocol.PubSubProtocol.PubSubResponse;
import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionPreferences;
import org.apache.hedwig.server.common.ServerConfiguration;
import org.apache.hedwig.server.persistence.PersistRequest;
import org.apache.hedwig.server.persistence.PersistenceManager;
import org.apache.hedwig.server.persistence.StubPersistenceManager;
import org.apache.hedwig.server.subscriptions.AllToAllTopologyFilter;
import org.apache.hedwig.util.Callback;
import org.junit.Test;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.google.protobuf.ByteString;

public class TestFIFODeliveryManager {
    static Logger logger = LoggerFactory.getLogger(TestFIFODeliveryManager.class);

    static class TestCallback implements Callback<MessageSeqId> {
        AtomicBoolean success = new AtomicBoolean(false);
        final CountDownLatch latch;
        MessageSeqId msgid = null;

        TestCallback(CountDownLatch l) {
            this.latch = l;
        }

        public void operationFailed(Object ctx, PubSubException exception) {
            logger.error("Persist operation failed", exception);
            latch.countDown();
        }

        public void operationFinished(Object ctx, MessageSeqId resultOfOperation) {
            msgid = resultOfOperation;
            success.set(true);
            latch.countDown();
        }

        MessageSeqId getId() {
            assertTrue("Persist operation failed", success.get());
            return msgid;
        }
    }

    /**
     * Delivery endpoint which puts all responses on a queue
     */
    static class ExecutorDeliveryEndPointWithQueue implements DeliveryEndPoint {
        ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();
        AtomicInteger numResponses = new AtomicInteger(0);
        ConcurrentLinkedQueue<PubSubResponse> queue =
            new ConcurrentLinkedQueue<PubSubResponse>();

        public void send(final PubSubResponse response, final DeliveryCallback callback) {
            logger.info("Received response {}", response);
            queue.add(response);
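            // Record the response first, then complete the send on the
            // executor, so getNextResponse() observes responses in delivery
            // order while sendingFinished() still runs asynchronously, much as
            // it would over a real channel.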
numResponses.incrementAndGet(); executor.submit(new Runnable() { public void run() { callback.sendingFinished(); } }); } public void close() { executor.shutdown(); } PubSubResponse getNextResponse() { return queue.poll(); } int getNumResponses() { return numResponses.get(); } } /** * Test that the FIFO delivery manager executes stopServing and startServing * in the correct order * {@link https://issues.apache.org/jira/browse/BOOKKEEPER-539} */ @Test public void testFIFODeliverySubCloseSubRace() throws Exception { ServerConfiguration conf = new ServerConfiguration(); ByteString topic = ByteString.copyFromUtf8("subRaceTopic"); ByteString subscriber = ByteString.copyFromUtf8("subRaceSubscriber"); PersistenceManager pm = new StubPersistenceManager(); FIFODeliveryManager fdm = new FIFODeliveryManager(pm, conf); ExecutorDeliveryEndPointWithQueue dep = new ExecutorDeliveryEndPointWithQueue(); SubscriptionPreferences prefs = SubscriptionPreferences.newBuilder().build(); PipelineFilter filter = new PipelineFilter(); filter.addLast(new AllToAllTopologyFilter()); filter.initialize(conf.getConf()); filter.setSubscriptionPreferences(topic, subscriber, prefs); MessageSeqId startId = MessageSeqId.newBuilder().setLocalComponent(1).build(); CountDownLatch l = new CountDownLatch(1); Message m = Message.newBuilder().setBody(ByteString.copyFromUtf8(String.valueOf(1))).build(); TestCallback cb = new TestCallback(l); pm.persistMessage(new PersistRequest(topic, m, cb, null)); assertTrue("Persistence never finished", l.await(10, TimeUnit.SECONDS)); final CountDownLatch oplatch = new CountDownLatch(3); fdm.start(); fdm.startServingSubscription(topic, subscriber, prefs, startId, dep, filter, new Callback() { @Override public void operationFinished(Object ctx, Void result) { oplatch.countDown(); } @Override public void operationFailed(Object ctx, PubSubException exception) { oplatch.countDown(); } }, null); fdm.stopServingSubscriber(topic, subscriber, null, new Callback() { @Override public void operationFinished(Object ctx, Void result) { oplatch.countDown(); } @Override public void operationFailed(Object ctx, PubSubException exception) { oplatch.countDown(); } }, null); fdm.startServingSubscription(topic, subscriber, prefs, startId, dep, filter, new Callback() { @Override public void operationFinished(Object ctx, Void result) { oplatch.countDown(); } @Override public void operationFailed(Object ctx, PubSubException exception) { oplatch.countDown(); } }, null); assertTrue("Ops never finished", oplatch.await(10, TimeUnit.SECONDS)); int seconds = 5; while (dep.getNumResponses() < 2) { if (seconds-- == 0) { break; } Thread.sleep(1000); } PubSubResponse r = dep.getNextResponse(); assertNotNull("There should be a response", r); assertTrue("Response should contain a message", r.hasMessage()); r = dep.getNextResponse(); assertNotNull("There should be a response", r); assertTrue("Response should contain a message", r.hasMessage()); r = dep.getNextResponse(); assertNull("There should only be 2 responses", r); } static class ExecutorDeliveryEndPoint implements DeliveryEndPoint { ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor(); AtomicInteger numDelivered = new AtomicInteger(); final DeliveryManager dm; ExecutorDeliveryEndPoint(DeliveryManager dm) { this.dm = dm; } public void send(final PubSubResponse response, final DeliveryCallback callback) { executor.submit(new Runnable() { public void run() { if (response.hasMessage()) { MessageSeqId msgid = response.getMessage().getMsgId(); if 
((msgid.getLocalComponent() % 2) == 1) { dm.messageConsumed(response.getTopic(), response.getSubscriberId(), response.getMessage().getMsgId()); } else { executor.schedule(new Runnable() { public void run() { dm.messageConsumed(response.getTopic(), response.getSubscriberId(), response.getMessage().getMsgId()); } }, 1, TimeUnit.SECONDS); } } numDelivered.incrementAndGet(); callback.sendingFinished(); } }); } public void close() { executor.shutdown(); } int getNumDelivered() { return numDelivered.get(); } } /** * Test throttle race issue cause by messageConsumed and doDeliverNextMessage * {@link https://issues.apache.org/jira/browse/BOOKKEEPER-503} */ @Test public void testFIFODeliveryThrottlingRace() throws Exception { final int numMessages = 20; final int throttleSize = 10; ServerConfiguration conf = new ServerConfiguration() { @Override public int getDefaultMessageWindowSize() { return throttleSize; } }; ByteString topic = ByteString.copyFromUtf8("throttlingRaceTopic"); ByteString subscriber = ByteString.copyFromUtf8("throttlingRaceSubscriber"); PersistenceManager pm = new StubPersistenceManager(); FIFODeliveryManager fdm = new FIFODeliveryManager(pm, conf); ExecutorDeliveryEndPoint dep = new ExecutorDeliveryEndPoint(fdm); SubscriptionPreferences prefs = SubscriptionPreferences.newBuilder().build(); PipelineFilter filter = new PipelineFilter(); filter.addLast(new AllToAllTopologyFilter()); filter.initialize(conf.getConf()); filter.setSubscriptionPreferences(topic, subscriber, prefs); CountDownLatch l = new CountDownLatch(numMessages); TestCallback firstCallback = null; for (int i = 0; i < numMessages; i++) { Message m = Message.newBuilder().setBody(ByteString.copyFromUtf8(String.valueOf(i))).build(); TestCallback cb = new TestCallback(l); if (firstCallback == null) { firstCallback = cb; } pm.persistMessage(new PersistRequest(topic, m, cb, null)); } fdm.start(); assertTrue("Persistence never finished", l.await(10, TimeUnit.SECONDS)); fdm.startServingSubscription(topic, subscriber, prefs, firstCallback.getId(), dep, filter, new Callback() { @Override public void operationFinished(Object ctx, Void result) { } @Override public void operationFailed(Object ctx, PubSubException exception) { // would not happened } }, null); int count = 30; // wait for 30 seconds maximum while (dep.getNumDelivered() < numMessages) { Thread.sleep(1000); if (count-- == 0) { break; } } assertEquals("Should have delivered " + numMessages, numMessages, dep.getNumDelivered()); } } TestThrottlingDelivery.java000066400000000000000000000362251244507361200364410ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/delivery/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package org.apache.hedwig.server.delivery; import java.io.IOException; import java.util.Arrays; import java.util.Collection; import java.util.Map; import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicInteger; import org.apache.commons.configuration.Configuration; import org.apache.commons.configuration.ConfigurationException; import org.junit.Before; import org.junit.Test; import org.junit.runner.RunWith; import org.junit.runners.Parameterized; import org.junit.runners.Parameterized.Parameters; import static org.junit.Assert.assertFalse; import static org.junit.Assert.assertTrue; import static org.junit.Assert.assertEquals; import com.google.protobuf.ByteString; import org.apache.hedwig.client.HedwigClient; import org.apache.hedwig.client.api.MessageHandler; import org.apache.hedwig.client.api.Publisher; import org.apache.hedwig.client.api.Subscriber; import org.apache.hedwig.filter.ClientMessageFilter; import org.apache.hedwig.filter.MessageFilterBase; import org.apache.hedwig.filter.ServerMessageFilter; import org.apache.hedwig.protocol.PubSubProtocol; import org.apache.hedwig.protocol.PubSubProtocol.Message; import org.apache.hedwig.protocol.PubSubProtocol.MessageHeader; import org.apache.hedwig.protocol.PubSubProtocol.MessageSeqId; import org.apache.hedwig.protocol.PubSubProtocol.SubscribeRequest.CreateOrAttach; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionOptions; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionPreferences; import org.apache.hedwig.protoextensions.SubscriptionStateUtils; import org.apache.hedwig.server.HedwigHubTestBase; import org.apache.hedwig.server.common.ServerConfiguration; import org.apache.hedwig.util.Callback; @RunWith(Parameterized.class) public class TestThrottlingDelivery extends HedwigHubTestBase { private static final int DEFAULT_MESSAGE_WINDOW_SIZE = 10; private static final String OPT_MOD = "MOD"; static class ModMessageFilter implements ServerMessageFilter, ClientMessageFilter { int mod; @Override public MessageFilterBase setSubscriptionPreferences(ByteString topic, ByteString subscriberId, SubscriptionPreferences preferences) { Map userOptions = SubscriptionStateUtils.buildUserOptions(preferences); ByteString modValue = userOptions.get(OPT_MOD); if (null == modValue) { mod = 0; } else { mod = Integer.valueOf(modValue.toStringUtf8()); } return this; } @Override public boolean testMessage(Message message) { int value = Integer.valueOf(message.getBody().toStringUtf8()); return 0 == value % mod; } @Override public ServerMessageFilter initialize(Configuration conf) throws ConfigurationException, IOException { // do nothing return this; } @Override public void uninitialize() { // do nothing } } protected class ThrottleDeliveryServerConfiguration extends HubServerConfiguration { ThrottleDeliveryServerConfiguration(int serverPort, int sslServerPort) { super(serverPort, sslServerPort); } @Override public int getDefaultMessageWindowSize() { return TestThrottlingDelivery.DEFAULT_MESSAGE_WINDOW_SIZE; } } protected class ThrottleDeliveryClientConfiguration extends HubClientConfiguration { int messageWindowSize; ThrottleDeliveryClientConfiguration() { this(TestThrottlingDelivery.DEFAULT_MESSAGE_WINDOW_SIZE); } ThrottleDeliveryClientConfiguration(int messageWindowSize) { this.messageWindowSize = messageWindowSize; } @Override public int getMaximumOutstandingMessages() { return messageWindowSize; } void setMessageWindowSize(int messageWindowSize) { 
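            // Tests can shrink or grow the client-side window between
            // subscriptions; getMaximumOutstandingMessages() above reports
            // whatever was last set here. E.g. (hypothetical):
            //     conf.setMessageWindowSize(DEFAULT_MESSAGE_WINDOW_SIZE / 2);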
this.messageWindowSize = messageWindowSize; } @Override public boolean isAutoSendConsumeMessageEnabled() { return false; } @Override public boolean isSubscriptionChannelSharingEnabled() { return isSubscriptionChannelSharingEnabled; } } private void publishNums(Publisher pub, ByteString topic, int start, int num, int M) throws Exception { for (int i = 1; i <= num; i++) { PubSubProtocol.Map.Builder propsBuilder = PubSubProtocol.Map.newBuilder().addEntries( PubSubProtocol.Map.Entry.newBuilder().setKey(OPT_MOD) .setValue(ByteString.copyFromUtf8(String.valueOf((start + i) % M)))); MessageHeader.Builder headerBuilder = MessageHeader.newBuilder().setProperties(propsBuilder); Message msg = Message.newBuilder().setBody(ByteString.copyFromUtf8(String.valueOf(start + i))) .setHeader(headerBuilder).build(); pub.publish(topic, msg); } } private void throttleWithFilter(Publisher pub, final Subscriber sub, ByteString topic, ByteString subid, final int X) throws Exception { // publish numbers with header (so only 3 messages would be delivered) publishNums(pub, topic, 0, 3 * X, X); // subscribe the topic with filter PubSubProtocol.Map userOptions = PubSubProtocol.Map .newBuilder() .addEntries( PubSubProtocol.Map.Entry.newBuilder().setKey(OPT_MOD) .setValue(ByteString.copyFromUtf8(String.valueOf(X)))).build(); SubscriptionOptions opts = SubscriptionOptions.newBuilder().setCreateOrAttach(CreateOrAttach.ATTACH) .setOptions(userOptions).setMessageFilter(ModMessageFilter.class.getName()).build(); sub.subscribe(topic, subid, opts); final AtomicInteger expected = new AtomicInteger(X); final CountDownLatch latch = new CountDownLatch(1); sub.startDelivery(topic, subid, new MessageHandler() { @Override public synchronized void deliver(ByteString topic, ByteString subscriberId, Message msg, Callback callback, Object context) { try { int value = Integer.valueOf(msg.getBody().toStringUtf8()); logger.debug("Received message {},", value); if (value == expected.get()) { expected.addAndGet(X); } else { // error condition logger.error("Did not receive expected value, expected {}, got {}", expected.get(), value); expected.set(0); latch.countDown(); } if (value == 3 * X) { latch.countDown(); } callback.operationFinished(context, null); sub.consume(topic, subscriberId, msg.getMsgId()); } catch (Exception e) { logger.error("Received bad message", e); latch.countDown(); } } }); assertTrue("Timed out waiting for messages " + 3 * X, latch.await(10, TimeUnit.SECONDS)); assertEquals("Should be expected message with " + 4 * X, 4 * X, expected.get()); sub.stopDelivery(topic, subid); sub.closeSubscription(topic, subid); } private void throttleX(Publisher pub, final Subscriber sub, ByteString topic, ByteString subid, final int X) throws Exception { for (int i=1; i<=3*X; i++) { pub.publish(topic, Message.newBuilder().setBody( ByteString.copyFromUtf8(String.valueOf(i))).build()); } sub.subscribe(topic, subid, CreateOrAttach.ATTACH); final AtomicInteger expected = new AtomicInteger(1); final CountDownLatch throttleLatch = new CountDownLatch(1); final CountDownLatch nonThrottleLatch = new CountDownLatch(1); sub.startDelivery(topic, subid, new MessageHandler() { @Override public synchronized void deliver(ByteString topic, ByteString subscriberId, Message msg, Callback callback, Object context) { try { int value = Integer.valueOf(msg.getBody().toStringUtf8()); logger.debug("Received message {},", value); if (value == expected.get()) { expected.incrementAndGet(); } else { // error condition logger.error("Did not receive expected value, 
expected {}, got {}", expected.get(), value); expected.set(0); throttleLatch.countDown(); nonThrottleLatch.countDown(); } if (expected.get() > X+1) { throttleLatch.countDown(); } if (expected.get() == (3 * X + 1)) { nonThrottleLatch.countDown(); } callback.operationFinished(context, null); if (expected.get() > X + 1) { sub.consume(topic, subscriberId, msg.getMsgId()); } } catch (Exception e) { logger.error("Received bad message", e); throttleLatch.countDown(); nonThrottleLatch.countDown(); } } }); assertFalse("Received more messages than throttle value " + X, throttleLatch.await(3, TimeUnit.SECONDS)); assertEquals("Should be expected messages with only " + (X+1), X+1, expected.get()); // consume messages to not throttle it for (int i=1; i<=X; i++) { sub.consume(topic, subid, MessageSeqId.newBuilder().setLocalComponent(i).build()); } assertTrue("Timed out waiting for messages " + (3*X + 1), nonThrottleLatch.await(10, TimeUnit.SECONDS)); assertEquals("Should be expected message with " + (3*X + 1), 3*X + 1, expected.get()); sub.stopDelivery(topic, subid); sub.closeSubscription(topic, subid); } @Parameters public static Collection configs() { return Arrays.asList(new Object[][] { { false }, { true } }); } protected boolean isSubscriptionChannelSharingEnabled; public TestThrottlingDelivery(boolean isSubscriptionChannelSharingEnabled) { super(1); this.isSubscriptionChannelSharingEnabled = isSubscriptionChannelSharingEnabled; } @Override @Before public void setUp() throws Exception { super.setUp(); } @Override protected ServerConfiguration getServerConfiguration(int port, int sslPort) { return new ThrottleDeliveryServerConfiguration(port, sslPort); } @Test(timeout=60000) public void testServerSideThrottle() throws Exception { int messageWindowSize = DEFAULT_MESSAGE_WINDOW_SIZE; ThrottleDeliveryClientConfiguration conf = new ThrottleDeliveryClientConfiguration(); HedwigClient client = new HedwigClient(conf); Publisher pub = client.getPublisher(); Subscriber sub = client.getSubscriber(); ByteString topic = ByteString.copyFromUtf8("testServerSideThrottle"); ByteString subid = ByteString.copyFromUtf8("serverThrottleSub"); sub.subscribe(topic, subid, CreateOrAttach.CREATE); sub.closeSubscription(topic, subid); // throttle with hub server's setting throttleX(pub, sub, topic, subid, DEFAULT_MESSAGE_WINDOW_SIZE); messageWindowSize = DEFAULT_MESSAGE_WINDOW_SIZE / 2; // throttle with a lower value than hub server's setting SubscriptionOptions.Builder optionsBuilder = SubscriptionOptions.newBuilder() .setCreateOrAttach(CreateOrAttach.CREATE) .setMessageWindowSize(messageWindowSize); topic = ByteString.copyFromUtf8("testServerSideThrottleWithLowerValue"); sub.subscribe(topic, subid, optionsBuilder.build()); sub.closeSubscription(topic, subid); throttleX(pub, sub, topic, subid, messageWindowSize); messageWindowSize = DEFAULT_MESSAGE_WINDOW_SIZE + 5; // throttle with a higher value than hub server's setting optionsBuilder = SubscriptionOptions.newBuilder() .setCreateOrAttach(CreateOrAttach.CREATE) .setMessageWindowSize(messageWindowSize); topic = ByteString.copyFromUtf8("testServerSideThrottleWithHigherValue"); sub.subscribe(topic, subid, optionsBuilder.build()); sub.closeSubscription(topic, subid); throttleX(pub, sub, topic, subid, messageWindowSize); client.close(); } @Test(timeout = 60000) public void testThrottleWithServerSideFilter() throws Exception { int messageWindowSize = DEFAULT_MESSAGE_WINDOW_SIZE; ThrottleDeliveryClientConfiguration conf = new ThrottleDeliveryClientConfiguration(); HedwigClient 
client = new HedwigClient(conf); Publisher pub = client.getPublisher(); Subscriber sub = client.getSubscriber(); ByteString topic = ByteString.copyFromUtf8("testThrottleWithServerSideFilter"); ByteString subid = ByteString.copyFromUtf8("mysub"); SubscriptionOptions opts = SubscriptionOptions.newBuilder().setCreateOrAttach(CreateOrAttach.CREATE).build(); sub.subscribe(topic, subid, opts); sub.closeSubscription(topic, subid); // message gap: half of the throttle threshold throttleWithFilter(pub, sub, topic, subid, messageWindowSize / 2); // message gap: equals to the throttle threshold throttleWithFilter(pub, sub, topic, subid, messageWindowSize); // message gap: larger than the throttle threshold throttleWithFilter(pub, sub, topic, subid, messageWindowSize + messageWindowSize / 2); } } bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/filter/000077500000000000000000000000001244507361200306055ustar00rootroot00000000000000TestMessageFilter.java000066400000000000000000000410211244507361200347610ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/filter/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package org.apache.hedwig.server.filter; import org.junit.After; import org.junit.Before; import org.junit.Test; import static org.junit.Assert.*; import java.io.IOException; import java.util.Map; import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicInteger; import org.apache.bookkeeper.util.ReflectionUtils; import org.apache.commons.configuration.Configuration; import org.apache.commons.configuration.ConfigurationException; import org.apache.hedwig.client.HedwigClient; import org.apache.hedwig.client.api.MessageHandler; import com.google.protobuf.ByteString; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.protocol.PubSubProtocol; import org.apache.hedwig.protocol.PubSubProtocol.Message; import org.apache.hedwig.protocol.PubSubProtocol.MessageHeader; import org.apache.hedwig.protocol.PubSubProtocol.SubscribeRequest.CreateOrAttach; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionOptions; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionPreferences; import org.apache.hedwig.protoextensions.MapUtils; import org.apache.hedwig.protoextensions.SubscriptionStateUtils; import org.apache.hedwig.client.api.Client; import org.apache.hedwig.client.api.Subscriber; import org.apache.hedwig.client.api.Publisher; import org.apache.hedwig.client.conf.ClientConfiguration; import org.apache.hedwig.filter.ClientMessageFilter; import org.apache.hedwig.filter.MessageFilterBase; import org.apache.hedwig.filter.ServerMessageFilter; import org.apache.hedwig.util.Callback; import org.apache.hedwig.server.HedwigHubTestBase; public class TestMessageFilter extends HedwigHubTestBase { // Client side variables protected ClientConfiguration conf; protected HedwigClient client; protected Publisher publisher; protected Subscriber subscriber; static final String OPT_MOD = "MOD"; static class ModMessageFilter implements ServerMessageFilter, ClientMessageFilter { int mod; @Override public ServerMessageFilter initialize(Configuration conf) { // do nothing return this; } @Override public void uninitialize() { // do nothing; } @Override public MessageFilterBase setSubscriptionPreferences(ByteString topic, ByteString subscriberId, SubscriptionPreferences preferences) { Map userOptions = SubscriptionStateUtils.buildUserOptions(preferences); ByteString modValue = userOptions.get(OPT_MOD); if (null == modValue) { mod = 0; } else { mod = Integer.valueOf(modValue.toStringUtf8()); } return this; } @Override public boolean testMessage(Message msg) { int value = Integer.valueOf(msg.getBody().toStringUtf8()); return 0 == value % mod; } } static class HeaderMessageFilter implements ServerMessageFilter, ClientMessageFilter { int mod; @Override public ServerMessageFilter initialize(Configuration conf) { // do nothing return this; } @Override public void uninitialize() { // do nothing } @Override public MessageFilterBase setSubscriptionPreferences(ByteString topic, ByteString subscriberId, SubscriptionPreferences preferences) { // do nothing now return this; } @Override public boolean testMessage(Message msg) { if (msg.hasHeader()) { MessageHeader header = msg.getHeader(); if (header.hasProperties()) { Map props = MapUtils.buildMap(header.getProperties()); ByteString value = props.get(OPT_MOD); if (null == value) { return false; } int intValue = Integer.valueOf(value.toStringUtf8()); if (0 != intValue) { return false; } return true; } else { return false; } } else { return false; } } } public TestMessageFilter() { 
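        // A single hub server is sufficient for these filter tests.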
super(1); } @Override @Before public void setUp() throws Exception { super.setUp(); conf = new HubClientConfiguration() { @Override public boolean isAutoSendConsumeMessageEnabled() { return false; } }; client = new HedwigClient(conf); publisher = client.getPublisher(); subscriber = client.getSubscriber(); } @Override @After public void tearDown() throws Exception { client.close(); super.tearDown(); } private void publishNums(ByteString topic, int start, int num, int M) throws Exception { for (int i=1; i<=num; i++) { PubSubProtocol.Map.Builder propsBuilder = PubSubProtocol.Map.newBuilder() .addEntries(PubSubProtocol.Map.Entry.newBuilder().setKey(OPT_MOD) .setValue(ByteString.copyFromUtf8(String.valueOf((start + i) % M)))); MessageHeader.Builder headerBuilder = MessageHeader.newBuilder().setProperties(propsBuilder); Message msg = Message.newBuilder().setBody( ByteString.copyFromUtf8(String.valueOf((start + i)))) .setHeader(headerBuilder).build(); publisher.publish(topic, msg); } } private void receiveNumModM(final ByteString topic, final ByteString subid, final String filterClassName, final ClientMessageFilter filter, final int start, final int num, final int M, final boolean consume) throws Exception { PubSubProtocol.Map userOptions = PubSubProtocol.Map.newBuilder() .addEntries(PubSubProtocol.Map.Entry.newBuilder().setKey(OPT_MOD) .setValue(ByteString.copyFromUtf8(String.valueOf(M)))).build(); SubscriptionOptions.Builder optionsBuilder = SubscriptionOptions.newBuilder() .setCreateOrAttach(CreateOrAttach.ATTACH) .setOptions(userOptions); if (null != filterClassName) { optionsBuilder.setMessageFilter(filterClassName); } subscriber.subscribe(topic, subid, optionsBuilder.build()); final int base = start + M - start % M; final AtomicInteger expected = new AtomicInteger(base); final CountDownLatch latch = new CountDownLatch(1); MessageHandler msgHandler = new MessageHandler() { public synchronized void deliver(ByteString topic, ByteString subscriberId, Message msg, Callback<Void> callback, Object context) { try { int value = Integer.valueOf(msg.getBody().toStringUtf8()); // duplicated messages received, ignore them if (value > start) { if (value == expected.get()) { expected.addAndGet(M); } else { logger.error("Did not receive expected value, expected {}, got {}", expected.get(), value); expected.set(0); latch.countDown(); } if (expected.get() == (base + num * M)) { latch.countDown(); } } callback.operationFinished(context, null); if (consume) { subscriber.consume(topic, subid, msg.getMsgId()); } } catch (Exception e) { logger.error("Received bad message", e); latch.countDown(); } } }; if (null != filter) { subscriber.startDeliveryWithFilter(topic, subid, msgHandler, filter); } else { subscriber.startDelivery(topic, subid, msgHandler); } assertTrue("Timed out waiting for messages mod " + M + " expected is " + expected.get(), latch.await(10, TimeUnit.SECONDS)); assertEquals("Should be expected message with " + (base + num * M), (base + num*M), expected.get()); subscriber.stopDelivery(topic, subid); subscriber.closeSubscription(topic, subid); } @Test(timeout=60000) public void testServerSideMessageFilter() throws Exception { ByteString topic = ByteString.copyFromUtf8("TestMessageFilter"); ByteString subid = ByteString.copyFromUtf8("mysub"); subscriber.subscribe(topic, subid, CreateOrAttach.CREATE_OR_ATTACH); subscriber.closeSubscription(topic, subid); publishNums(topic, 0, 100, 2); receiveNumModM(topic, subid, ModMessageFilter.class.getName(), null, 0, 50, 2, true); } @Test(timeout=60000) public void
testInvalidServerSideMessageFilter() throws Exception { ByteString topic = ByteString.copyFromUtf8("TestInvalidMessageFilter"); ByteString subid = ByteString.copyFromUtf8("mysub"); SubscriptionOptions options = SubscriptionOptions.newBuilder() .setCreateOrAttach(CreateOrAttach.CREATE_OR_ATTACH) .setMessageFilter("Invalid_Message_Filter").build(); try { subscriber.subscribe(topic, subid, options); // shouldn't reach here fail("Should fail subscribe with invalid message filter"); } catch (PubSubException pse) { assertTrue("Should respond with INVALID_MESSAGE_FILTER", pse.getMessage().contains("INVALID_MESSAGE_FILTER")); } } @Test(timeout=60000) public void testChangeSubscriptionPreferences() throws Exception { ByteString topic = ByteString.copyFromUtf8("TestChangeSubscriptionPreferences"); ByteString subid = ByteString.copyFromUtf8("mysub"); subscriber.subscribe(topic, subid, CreateOrAttach.CREATE_OR_ATTACH); subscriber.closeSubscription(topic, subid); publishNums(topic, 0, 100, 2); receiveNumModM(topic, subid, ModMessageFilter.class.getName(), null, 0, 50, 2, false); receiveNumModM(topic, subid, ModMessageFilter.class.getName(), null, 0, 25, 4, false); receiveNumModM(topic, subid, ModMessageFilter.class.getName(), null, 0, 33, 3, true); // change mod to receive numbers mod 5 publishNums(topic, 100, 100, 5); receiveNumModM(topic, subid, ModMessageFilter.class.getName(), null, 100, 20, 5, true); // change mod to receive numbers mod 7 publishNums(topic, 200, 100, 7); receiveNumModM(topic, subid, ModMessageFilter.class.getName(), null, 200, 14, 7, true); } @Test(timeout=60000) public void testChangeServerSideMessageFilter() throws Exception { ByteString topic = ByteString.copyFromUtf8("TestChangeMessageFilter"); ByteString subid = ByteString.copyFromUtf8("mysub"); subscriber.subscribe(topic, subid, CreateOrAttach.CREATE_OR_ATTACH); subscriber.closeSubscription(topic, subid); publishNums(topic, 0, 100, 3); receiveNumModM(topic, subid, ModMessageFilter.class.getName(), null, 0, 50, 2, false); receiveNumModM(topic, subid, ModMessageFilter.class.getName(), null, 0, 25, 4, false); receiveNumModM(topic, subid, HeaderMessageFilter.class.getName(), null, 0, 33, 3, true); publishNums(topic, 200, 100, 7); receiveNumModM(topic, subid, HeaderMessageFilter.class.getName(), null, 200, 14, 7, true); } @Test(timeout=60000) public void testFixInvalidServerSideMessageFilter() throws Exception { ByteString topic = ByteString.copyFromUtf8("TestFixMessageFilter"); ByteString subid = ByteString.copyFromUtf8("mysub"); subscriber.subscribe(topic, subid, CreateOrAttach.CREATE_OR_ATTACH); subscriber.closeSubscription(topic, subid); publishNums(topic, 0, 100, 3); try { receiveNumModM(topic, subid, "Invalid_Message_Filter", null, 0, 33, 3, true); // shouldn't reach here fail("Should fail subscribe with invalid message filter"); } catch (Exception pse) { assertTrue("Should respond with INVALID_MESSAGE_FILTER", pse.getMessage().contains("INVALID_MESSAGE_FILTER")); } receiveNumModM(topic, subid, HeaderMessageFilter.class.getName(), null, 0, 33, 3, true); } @Test(timeout=60000) public void testNullClientMessageFilter() throws Exception { ByteString topic = ByteString.copyFromUtf8("TestNullClientMessageFilter"); ByteString subid = ByteString.copyFromUtf8("mysub"); subscriber.subscribe(topic, subid, CreateOrAttach.CREATE_OR_ATTACH); try { subscriber.startDeliveryWithFilter(topic, subid, null, new ModMessageFilter()); fail("Should fail start delivery with filter using null message handler."); } catch (NullPointerException npe) { } try
{ subscriber.startDeliveryWithFilter(topic, subid, new MessageHandler() { public void deliver(ByteString topic, ByteString subscriberId, Message msg, Callback<Void> callback, Object context) { // do nothing } }, null); fail("Should fail start delivery with filter using null message filter."); } catch (NullPointerException npe) { } } @Test(timeout=60000) public void testClientSideMessageFilter() throws Exception { ByteString topic = ByteString.copyFromUtf8("TestClientMessageFilter"); ByteString subid = ByteString.copyFromUtf8("mysub"); subscriber.subscribe(topic, subid, CreateOrAttach.CREATE_OR_ATTACH); subscriber.closeSubscription(topic, subid); publishNums(topic, 0, 100, 2); receiveNumModM(topic, subid, null, new ModMessageFilter(), 0, 50, 2, true); } @Test(timeout=60000) public void testChangeSubscriptionPreferencesForClientFilter() throws Exception { ByteString topic = ByteString.copyFromUtf8("TestChangeSubscriptionPreferencesForClientFilter"); ByteString subid = ByteString.copyFromUtf8("mysub"); subscriber.subscribe(topic, subid, CreateOrAttach.CREATE_OR_ATTACH); subscriber.closeSubscription(topic, subid); publishNums(topic, 0, 100, 2); receiveNumModM(topic, subid, null, new ModMessageFilter(), 0, 50, 2, false); receiveNumModM(topic, subid, null, new ModMessageFilter(), 0, 25, 4, false); receiveNumModM(topic, subid, null, new ModMessageFilter(), 0, 33, 3, true); } @Test(timeout=60000) public void testChangeClientSideMessageFilter() throws Exception { ByteString topic = ByteString.copyFromUtf8("TestChangeClientSideMessageFilter"); ByteString subid = ByteString.copyFromUtf8("mysub"); subscriber.subscribe(topic, subid, CreateOrAttach.CREATE_OR_ATTACH); subscriber.closeSubscription(topic, subid); publishNums(topic, 0, 100, 3); receiveNumModM(topic, subid, null, new ModMessageFilter(), 0, 50, 2, false); receiveNumModM(topic, subid, null, new HeaderMessageFilter(), 0, 33, 3, true); } } bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/handlers/000077500000000000000000000000001244507361200311205ustar00rootroot00000000000000TestBaseHandler.java000066400000000000000000000075261244507361200347260ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/handlers/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License.
*/ package org.apache.hedwig.server.handlers; import java.util.List; import junit.framework.TestCase; import org.jboss.netty.channel.Channel; import org.junit.Before; import org.junit.Test; import org.apache.hedwig.protocol.PubSubProtocol.PubSubRequest; import org.apache.hedwig.protocol.PubSubProtocol.PubSubResponse; import org.apache.hedwig.protocol.PubSubProtocol.StatusCode; import org.apache.hedwig.server.common.ServerConfiguration; import org.apache.hedwig.server.netty.WriteRecordingChannel; import org.apache.hedwig.server.topics.StubTopicManager; import org.apache.hedwig.server.topics.TopicManager; public class TestBaseHandler extends TestCase { MyBaseHandler handler; StubTopicManager tm; PubSubRequest request = PubSubRequest.getDefaultInstance(); WriteRecordingChannel channel = new WriteRecordingChannel(); protected class MyBaseHandler extends BaseHandler { public MyBaseHandler(TopicManager tm, ServerConfiguration conf) { super(tm, conf); } PubSubRequest request; public PubSubRequest getRequest() { return request; } @Override public void handleRequestAtOwner(PubSubRequest request, Channel channel) { this.request = request; } } @Override @Before public void setUp() throws Exception { ServerConfiguration conf = new ServerConfiguration(); tm = new StubTopicManager(conf); handler = new MyBaseHandler(tm, conf); request = PubSubRequest.getDefaultInstance(); channel = new WriteRecordingChannel(); } public PubSubResponse getPubSubResponse(WriteRecordingChannel channel) { List messages = channel.getMessagesWritten(); assertEquals(messages.size(), 1); Object message = messages.get(0); assertEquals(message.getClass(), PubSubResponse.class); return (PubSubResponse) message; } @Test(timeout=60000) public void testHandleRequestOnRedirect() throws Exception { tm.setShouldOwnEveryNewTopic(false); handler.handleRequest(request, channel); PubSubResponse response = getPubSubResponse(channel); assertEquals(response.getStatusCode(), StatusCode.NOT_RESPONSIBLE_FOR_TOPIC); assertEquals(request.getTxnId(), response.getTxnId()); assertNull(handler.getRequest()); } @Test(timeout=60000) public void testHandleRequestOnOwner() throws Exception { tm.setShouldOwnEveryNewTopic(true); handler.handleRequest(request, channel); assertEquals(0, channel.getMessagesWritten().size()); assertEquals(handler.getRequest(), request); } @Test(timeout=60000) public void testHandleRequestOnError() throws Exception { tm.setShouldError(true); handler.handleRequest(request, channel); PubSubResponse response = getPubSubResponse(channel); assertEquals(response.getStatusCode(), StatusCode.SERVICE_DOWN); assertEquals(request.getTxnId(), response.getTxnId()); assertNull(handler.getRequest()); } } TestSubUnsubHandler.java000066400000000000000000000213551244507361200356160ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/handlers/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. 
You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.handlers; import java.util.HashSet; import java.util.Set; import java.util.concurrent.Executors; import java.util.concurrent.ScheduledExecutorService; import org.junit.Test; import com.google.protobuf.ByteString; import org.apache.hedwig.StubCallback; import org.apache.hedwig.client.data.TopicSubscriber; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.filter.PipelineFilter; import org.apache.hedwig.protocol.PubSubProtocol.MessageSeqId; import org.apache.hedwig.protocol.PubSubProtocol.OperationType; import org.apache.hedwig.protocol.PubSubProtocol.ProtocolVersion; import org.apache.hedwig.protocol.PubSubProtocol.PubSubRequest; import org.apache.hedwig.protocol.PubSubProtocol.PubSubResponse; import org.apache.hedwig.protocol.PubSubProtocol.StatusCode; import org.apache.hedwig.protocol.PubSubProtocol.SubscribeRequest; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionData; import org.apache.hedwig.protocol.PubSubProtocol.UnsubscribeRequest; import org.apache.hedwig.protocol.PubSubProtocol.SubscribeRequest.CreateOrAttach; import org.apache.hedwig.server.common.ServerConfiguration; import org.apache.hedwig.server.delivery.ChannelEndPoint; import org.apache.hedwig.server.delivery.StubDeliveryManager; import org.apache.hedwig.server.delivery.StubDeliveryManager.StartServingRequest; import org.apache.hedwig.server.netty.WriteRecordingChannel; import org.apache.hedwig.server.persistence.LocalDBPersistenceManager; import org.apache.hedwig.server.persistence.PersistenceManager; import org.apache.hedwig.server.subscriptions.AllToAllTopologyFilter; import org.apache.hedwig.server.subscriptions.StubSubscriptionManager; import org.apache.hedwig.server.topics.TopicManager; import org.apache.hedwig.server.topics.TrivialOwnAllTopicManager; import org.apache.hedwig.util.ConcurrencyUtils; import junit.framework.TestCase; public class TestSubUnsubHandler extends TestCase { SubscribeHandler sh; StubDeliveryManager dm; StubSubscriptionManager sm; SubscriptionChannelManager subChannelMgr; ByteString topic = ByteString.copyFromUtf8("topic"); WriteRecordingChannel channel; SubscribeRequest subRequestPrototype; PubSubRequest pubSubRequestPrototype; ByteString subscriberId; UnsubscribeHandler ush; @Override protected void setUp() throws Exception { super.setUp(); ServerConfiguration conf = new ServerConfiguration(); ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor(); TopicManager tm = new TrivialOwnAllTopicManager(conf, executor); dm = new StubDeliveryManager(); PersistenceManager pm = LocalDBPersistenceManager.instance(); sm = new StubSubscriptionManager(tm, pm, dm, conf, executor); subChannelMgr = new SubscriptionChannelManager(); sh = new SubscribeHandler(conf, tm, dm, pm, sm, subChannelMgr); channel = new WriteRecordingChannel(); subscriberId = ByteString.copyFromUtf8("subId"); subRequestPrototype = SubscribeRequest.newBuilder().setSubscriberId(subscriberId).build(); pubSubRequestPrototype = PubSubRequest.newBuilder().setProtocolVersion(ProtocolVersion.VERSION_ONE).setType( 
OperationType.SUBSCRIBE).setTxnId(0).setTopic(topic).setSubscribeRequest(subRequestPrototype).build(); ush = new UnsubscribeHandler(conf, tm, sm, dm, subChannelMgr); } @Test(timeout=60000) public void testNoSubscribeRequest() { sh.handleRequestAtOwner(PubSubRequest.newBuilder(pubSubRequestPrototype).clearSubscribeRequest().build(), channel); assertEquals(StatusCode.MALFORMED_REQUEST, ((PubSubResponse) channel.getMessagesWritten().get(0)) .getStatusCode()); } @Test(timeout=60000) public void testSuccessCase() { StubCallback callback = new StubCallback(); sm.acquiredTopic(topic, callback, null); assertNull(ConcurrencyUtils.take(callback.queue).right()); sh.handleRequestAtOwner(pubSubRequestPrototype, channel); assertEquals(StatusCode.SUCCESS, ((PubSubResponse) channel.getMessagesWritten().get(0)).getStatusCode()); // make sure the channel was put in the maps Set topicSubs = new HashSet(); topicSubs.add(new TopicSubscriber(topic, subscriberId)); assertEquals(topicSubs, subChannelMgr.channel2sub.get(channel)); assertEquals(channel, subChannelMgr.sub2Channel.get(new TopicSubscriber(topic, subscriberId))); // make sure delivery was started StartServingRequest startRequest = (StartServingRequest) dm.lastRequest.poll(); assertEquals(channel, ((ChannelEndPoint) startRequest.endPoint).getChannel()); assertEquals(PipelineFilter.class, startRequest.filter.getClass()); PipelineFilter pfilter = (PipelineFilter)(startRequest.filter); assertEquals(1, pfilter.size()); assertEquals(AllToAllTopologyFilter.class, pfilter.getFirst().getClass()); assertEquals(1, startRequest.seqIdToStartFrom.getLocalComponent()); assertEquals(subscriberId, startRequest.subscriberId); assertEquals(topic, startRequest.topic); // make sure subscription was registered StubCallback callback1 = new StubCallback(); sm.serveSubscribeRequest(topic, SubscribeRequest.newBuilder(subRequestPrototype).setCreateOrAttach( CreateOrAttach.CREATE).build(), MessageSeqId.newBuilder().setLocalComponent(10).build(), callback1, null); assertEquals(PubSubException.ClientAlreadySubscribedException.class, ConcurrencyUtils.take(callback1.queue) .right().getClass()); // trying to subscribe again should throw an error WriteRecordingChannel dupChannel = new WriteRecordingChannel(); sh.handleRequestAtOwner(pubSubRequestPrototype, dupChannel); assertEquals(StatusCode.TOPIC_BUSY, ((PubSubResponse) dupChannel.getMessagesWritten().get(0)).getStatusCode()); // after disconnecting the channel, subscribe should work again subChannelMgr.channelDisconnected(channel); dupChannel = new WriteRecordingChannel(); sh.handleRequestAtOwner(pubSubRequestPrototype, dupChannel); assertEquals(StatusCode.SUCCESS, ((PubSubResponse) dupChannel.getMessagesWritten().get(0)).getStatusCode()); // test unsubscribe channel = new WriteRecordingChannel(); ush.handleRequestAtOwner(pubSubRequestPrototype, channel); assertEquals(StatusCode.MALFORMED_REQUEST, ((PubSubResponse) channel.getMessagesWritten().get(0)) .getStatusCode()); PubSubRequest unsubRequest = PubSubRequest.newBuilder(pubSubRequestPrototype).setUnsubscribeRequest( UnsubscribeRequest.newBuilder().setSubscriberId(subscriberId)).build(); channel = new WriteRecordingChannel(); dm.lastRequest.clear(); ush.handleRequestAtOwner(unsubRequest, channel); assertEquals(StatusCode.SUCCESS, ((PubSubResponse) channel.getMessagesWritten().get(0)).getStatusCode()); // make sure delivery has been stopped assertEquals(new TopicSubscriber(topic, subscriberId), dm.lastRequest.poll()); // make sure the info is gone from the sm StubCallback callback2 
= new StubCallback(); sm.serveSubscribeRequest(topic, SubscribeRequest.newBuilder(subRequestPrototype).setCreateOrAttach( CreateOrAttach.ATTACH).build(), MessageSeqId.newBuilder().setLocalComponent(10).build(), callback2, null); assertEquals(PubSubException.ClientNotSubscribedException.class, ConcurrencyUtils.take(callback2.queue).right() .getClass()); } } bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/integration/000077500000000000000000000000001244507361200316435ustar00rootroot00000000000000TestHedwigHub.java000066400000000000000000000771631244507361200351530ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/integration/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.integration; import java.net.InetSocketAddress; import java.util.Arrays; import java.util.Collection; import java.util.HashSet; import java.util.concurrent.SynchronousQueue; import org.junit.After; import org.junit.Before; import org.junit.Test; import com.google.protobuf.ByteString; import org.apache.hedwig.client.api.MessageHandler; import org.apache.hedwig.client.api.Subscriber; import org.apache.hedwig.client.conf.ClientConfiguration; import org.apache.hedwig.client.exceptions.InvalidSubscriberIdException; import org.apache.hedwig.client.exceptions.AlreadyStartDeliveryException; import org.apache.hedwig.client.HedwigClient; import org.apache.hedwig.client.api.Client; import org.apache.hedwig.client.api.Publisher; import org.apache.hedwig.client.api.Subscriber; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.exceptions.PubSubException.ClientNotSubscribedException; import org.apache.hedwig.protocol.PubSubProtocol.Message; import org.apache.hedwig.protocol.PubSubProtocol.MessageSeqId; import org.apache.hedwig.protocol.PubSubProtocol.OperationType; import org.apache.hedwig.protocol.PubSubProtocol.ProtocolVersion; import org.apache.hedwig.protocol.PubSubProtocol.PubSubRequest; import org.apache.hedwig.protocol.PubSubProtocol.PubSubResponse; import org.apache.hedwig.protocol.PubSubProtocol.StartDeliveryRequest; import org.apache.hedwig.protocol.PubSubProtocol.StopDeliveryRequest; import org.apache.hedwig.protocol.PubSubProtocol.StatusCode; import org.apache.hedwig.protocol.PubSubProtocol.SubscribeRequest.CreateOrAttach; import org.apache.hedwig.protoextensions.SubscriptionStateUtils; import org.apache.hedwig.server.HedwigHubTestBase; import org.apache.hedwig.server.netty.WriteRecordingChannel; import org.apache.hedwig.server.proxy.HedwigProxy; import org.apache.hedwig.server.proxy.ProxyConfiguration; import org.apache.hedwig.server.regions.HedwigHubClient; import org.apache.hedwig.util.Callback; import org.apache.hedwig.util.ConcurrencyUtils; import 
org.apache.hedwig.util.HedwigSocketAddress; import org.apache.bookkeeper.test.PortManager; import org.apache.hedwig.server.LoggingExceptionHandler; public abstract class TestHedwigHub extends HedwigHubTestBase { // Client side variables protected HedwigClient client; protected Publisher publisher; protected Subscriber subscriber; // Common ByteStrings used in tests. private final ByteString localSubscriberId = ByteString.copyFromUtf8("LocalSubscriber"); private final ByteString hubSubscriberId = ByteString.copyFromUtf8(SubscriptionStateUtils.HUB_SUBSCRIBER_PREFIX + "HubSubscriber"); enum Mode { REGULAR, PROXY, SSL }; protected Mode mode; protected boolean isSubscriptionChannelSharingEnabled; public TestHedwigHub(Mode mode, boolean isSubscriptionChannelSharingEnabled) { super(3); this.mode = mode; this.isSubscriptionChannelSharingEnabled = isSubscriptionChannelSharingEnabled; } protected HedwigProxy proxy; protected ProxyConfiguration proxyConf = new ProxyConfiguration() { final int proxyPort = PortManager.nextFreePort(); @Override public HedwigSocketAddress getDefaultServerHedwigSocketAddress() { return serverAddresses.get(0); } @Override public int getProxyPort() { return proxyPort; } }; // SynchronousQueues to verify async calls private final SynchronousQueue<Boolean> queue = new SynchronousQueue<Boolean>(); private final SynchronousQueue<Boolean> consumeQueue = new SynchronousQueue<Boolean>(); // Test implementation of Callback for async client actions. static class TestCallback implements Callback<Void> { private final SynchronousQueue<Boolean> queue; public TestCallback(SynchronousQueue<Boolean> queue) { this.queue = queue; } @Override public void operationFinished(Object ctx, Void resultOfOperation) { new Thread(new Runnable() { @Override public void run() { if (logger.isDebugEnabled()) logger.debug("Operation finished!"); ConcurrencyUtils.put(queue, true); } }).start(); } @Override public void operationFailed(Object ctx, final PubSubException exception) { new Thread(new Runnable() { @Override public void run() { logger.error("Operation failed!", exception); ConcurrencyUtils.put(queue, false); } }).start(); } } // Test implementation of subscriber's message handler. static class TestMessageHandler implements MessageHandler { // For subscribe reconnect testing, the server could send us back // messages we've already processed and consumed. We need to keep // track of the ones we've encountered so we only signal back to the // consumeQueue once. private HashSet<MessageSeqId> consumedMessages = new HashSet<MessageSeqId>(); private long largestMsgSeqIdConsumed = -1; private final SynchronousQueue<Boolean> consumeQueue; public TestMessageHandler(SynchronousQueue<Boolean> consumeQueue) { this.consumeQueue = consumeQueue; } public void deliver(ByteString topic, ByteString subscriberId, final Message msg, Callback<Void> callback, Object context) { if (!consumedMessages.contains(msg.getMsgId())) { // New message to consume. Add it to the Set of consumed // messages. consumedMessages.add(msg.getMsgId()); // Check that the msg seq ID is incrementing by 1 compared to // the last consumed message. Don't do this check if this is the // initial message being consumed.
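// (largestMsgSeqIdConsumed starts at -1, so the gap check is skipped for the very first message)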
if (largestMsgSeqIdConsumed >= 0 && msg.getMsgId().getLocalComponent() != largestMsgSeqIdConsumed + 1) { new Thread(new Runnable() { @Override public void run() { if (logger.isDebugEnabled()) logger.debug("Consuming message that is out of order for msgId: " + msg.getMsgId().getLocalComponent()); ConcurrencyUtils.put(consumeQueue, false); } }).start(); } else { new Thread(new Runnable() { @Override public void run() { if (logger.isDebugEnabled()) logger.debug("Consume operation finished successfully!"); ConcurrencyUtils.put(consumeQueue, true); } }).start(); } // Store the consumed message as the new last msg id consumed. largestMsgSeqIdConsumed = msg.getMsgId().getLocalComponent(); } else { if (logger.isDebugEnabled()) logger.debug("Consumed a message that we've processed already: " + msg); } callback.operationFinished(context, null); } } class TestClientConfiguration extends HubClientConfiguration { @Override public InetSocketAddress getDefaultServerHost() { if (mode == Mode.PROXY) { return new InetSocketAddress(proxyConf.getProxyPort()); } else { return super.getDefaultServerHost(); } } @Override public boolean isSSLEnabled() { if (mode == Mode.SSL) return true; else return false; } @Override public boolean isSubscriptionChannelSharingEnabled() { return isSubscriptionChannelSharingEnabled; } } // ClientConfiguration to use for this test. protected ClientConfiguration getClientConfiguration() { return new TestClientConfiguration(); } @Override @Before public void setUp() throws Exception { super.setUp(); if (mode == Mode.PROXY) { proxy = new HedwigProxy(proxyConf, new LoggingExceptionHandler()); proxy.start(); } client = new HedwigClient(getClientConfiguration()); publisher = client.getPublisher(); subscriber = client.getSubscriber(); } @Override @After public void tearDown() throws Exception { client.close(); if (mode == Mode.PROXY) { proxy.shutdown(); } super.tearDown(); } // Helper function to generate Messages protected Message getMsg(int msgNum) { return Message.newBuilder().setBody(ByteString.copyFromUtf8("Message" + msgNum)).build(); } // Helper function to generate Topics protected ByteString getTopic(int topicNum) { return ByteString.copyFromUtf8("Topic" + topicNum); } protected void startDelivery(ByteString topic, ByteString subscriberId, MessageHandler handler) throws Exception { startDelivery(subscriber, topic, subscriberId, handler); } protected void startDelivery(Subscriber subscriber, ByteString topic, ByteString subscriberId, MessageHandler handler) throws Exception { subscriber.startDelivery(topic, subscriberId, handler); if (mode == Mode.PROXY) { WriteRecordingChannel channel = new WriteRecordingChannel(); PubSubRequest request = PubSubRequest.newBuilder().setProtocolVersion(ProtocolVersion.VERSION_ONE) .setTopic(topic).setTxnId(0).setType(OperationType.START_DELIVERY).setStartDeliveryRequest( StartDeliveryRequest.newBuilder().setSubscriberId(subscriberId)).build(); proxy.getStartDeliveryHandler().handleRequest(request, channel); assertEquals(StatusCode.SUCCESS, ((PubSubResponse) channel.getMessagesWritten().get(0)).getStatusCode()); } } protected void stopDelivery(ByteString topic, ByteString subscriberId) throws Exception { stopDelivery(subscriber, topic, subscriberId); } protected void stopDelivery(Subscriber subscriber, ByteString topic, ByteString subscriberId) throws Exception { subscriber.stopDelivery(topic, subscriberId); if (mode == Mode.PROXY) { PubSubRequest request = PubSubRequest.newBuilder().setProtocolVersion(ProtocolVersion.VERSION_ONE) 
.setTopic(topic).setTxnId(1).setType(OperationType.STOP_DELIVERY).setStopDeliveryRequest( StopDeliveryRequest.newBuilder().setSubscriberId(subscriberId)).build(); proxy.getStopDeliveryHandler().handleRequest(request, proxy.getChannelTracker().getChannel(topic, subscriberId)); } } protected void publishBatch(int batchSize, boolean expected, boolean messagesToBeConsumed, int loop) throws Exception { if (logger.isDebugEnabled()) logger.debug("Publishing " + loop + " batch of messages."); for (int i = 0; i < batchSize; i++) { publisher.asyncPublish(getTopic(i), getMsg(i + loop * batchSize), new TestCallback(queue), null); assertTrue(expected == queue.take()); if (messagesToBeConsumed) assertTrue(consumeQueue.take()); } } protected void subscribeToTopics(int batchSize) throws Exception { if (logger.isDebugEnabled()) logger.debug("Subscribing to topics and starting delivery."); for (int i = 0; i < batchSize; i++) { subscriber.asyncSubscribe(getTopic(i), localSubscriberId, CreateOrAttach.CREATE_OR_ATTACH, new TestCallback(queue), null); assertTrue(queue.take()); } // Start delivery for the subscriber for (int i = 0; i < batchSize; i++) { startDelivery(getTopic(i), localSubscriberId, new TestMessageHandler(consumeQueue)); } } protected void shutDownLastServer() { if (logger.isDebugEnabled()) logger.debug("Shutting down the last server in the Hedwig hub cluster."); serversList.get(serversList.size() - 1).shutdown(); // Due to a possible race condition, after we've shutdown the server, // the client could still be caching the channel connection to that // server. It is possible for a publish request to go to the shutdown // server using the closed/shutdown channel before the channel // disconnect logic kicks in. What could happen is that the publish // is done successfully on the channel but the server on the other end // can't/won't read it. This publish request will time out and the // Junit test will fail. Since that particular scenario is not what is // tested here, use a workaround of sleeping in this thread (so the // channel disconnect logic can complete) before we publish again. try { Thread.sleep(1000); } catch (InterruptedException e) { logger.error("Thread was interrupted while sleeping after shutting down last server!", e); } } // This tests out the manual sending of consume messages to the server // instead of relying on the automatic sending by the client lib for it. 
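// Note: the client created below overrides isAutoSendConsumeMessageEnabled() to return false, so every
// received message has to be acknowledged explicitly through Subscriber#consume.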
@Test(timeout=10000) public void testManualConsumeClient() throws Exception { HedwigClient myClient = new HedwigClient(new TestClientConfiguration() { @Override public boolean isAutoSendConsumeMessageEnabled() { return false; } }); Subscriber mySubscriber = myClient.getSubscriber(); Publisher myPublisher = myClient.getPublisher(); ByteString myTopic = getTopic(0); // Subscribe to a topic and start delivery on it mySubscriber.asyncSubscribe(myTopic, localSubscriberId, CreateOrAttach.CREATE_OR_ATTACH, new TestCallback(queue), null); assertTrue(queue.take()); startDelivery(mySubscriber, myTopic, localSubscriberId, new TestMessageHandler(consumeQueue)); // Publish some messages int batchSize = 10; for (int i = 0; i < batchSize; i++) { myPublisher.asyncPublish(myTopic, getMsg(i), new TestCallback(queue), null); assertTrue(queue.take()); assertTrue(consumeQueue.take()); } // Now manually send a consume message for each message received for (int i = 0; i < batchSize; i++) { boolean success = true; try { mySubscriber.consume(myTopic, localSubscriberId, MessageSeqId.newBuilder().setLocalComponent(i + 1) .build()); } catch (ClientNotSubscribedException e) { success = false; } assertTrue(success); } // Since the consume call eventually does an async write to the Netty // channel, the writing of the consume requests may not have completed // yet before we stop the client. Sleep a little before we stop the // client just so error messages are not logged. try { Thread.sleep(1000); } catch (InterruptedException e) { logger.error("Thread was interrupted while waiting to stop client for manual consume test!!", e); } myClient.close(); } @Test(timeout=10000) public void testAttachToSubscriptionSuccess() throws Exception { ByteString topic = getTopic(0); subscriber.asyncSubscribe(topic, localSubscriberId, CreateOrAttach.CREATE_OR_ATTACH, new TestCallback(queue), null); assertTrue(queue.take()); // Close the subscription asynchronously subscriber.asyncCloseSubscription(topic, localSubscriberId, new TestCallback(queue), null); assertTrue(queue.take()); // Now try to attach to the subscription subscriber.asyncSubscribe(topic, localSubscriberId, CreateOrAttach.ATTACH, new TestCallback(queue), null); assertTrue(queue.take()); // Start delivery and publish some messages. Make sure they are consumed // correctly. 
startDelivery(topic, localSubscriberId, new TestMessageHandler(consumeQueue)); int batchSize = 5; for (int i = 0; i < batchSize; i++) { publisher.asyncPublish(topic, getMsg(i), new TestCallback(queue), null); assertTrue(queue.take()); assertTrue(consumeQueue.take()); } } @Test(timeout=10000) public void testServerRedirect() throws Exception { int batchSize = 10; publishBatch(batchSize, true, false, 0); } @Test(timeout=10000) public void testSubscribeAndConsume() throws Exception { int batchSize = 10; subscribeToTopics(batchSize); publishBatch(batchSize, true, true, 0); } @Test(timeout=10000) public void testServerFailoverPublishOnly() throws Exception { int batchSize = 10; publishBatch(batchSize, true, false, 0); shutDownLastServer(); publishBatch(batchSize, true, false, 1); } @Test(timeout=10000) public void testServerFailover() throws Exception { int batchSize = 10; subscribeToTopics(batchSize); publishBatch(batchSize, true, true, 0); shutDownLastServer(); publishBatch(batchSize, true, true, 1); } @Test(timeout=10000) public void testUnsubscribe() throws Exception { ByteString topic = getTopic(0); subscriber.asyncSubscribe(topic, localSubscriberId, CreateOrAttach.CREATE_OR_ATTACH, new TestCallback(queue), null); assertTrue(queue.take()); startDelivery(topic, localSubscriberId, new TestMessageHandler(consumeQueue)); publisher.asyncPublish(topic, getMsg(0), new TestCallback(queue), null); assertTrue(queue.take()); assertTrue(consumeQueue.take()); // Send an Unsubscribe request subscriber.asyncUnsubscribe(topic, localSubscriberId, new TestCallback(queue), null); assertTrue(queue.take()); // Now publish a message and make sure it is not consumed by the client publisher.asyncPublish(topic, getMsg(1), new TestCallback(queue), null); assertTrue(queue.take()); // Wait a little bit just in case the message handler is still active, // consuming the message, and then putting a true value in the // consumeQueue. Thread.sleep(1000); // Put a False value on the consumeQueue so we can verify that it // is not blocked by a message consume action which already put a True // value into the queue. 
new Thread(new Runnable() { @Override public void run() { ConcurrencyUtils.put(consumeQueue, false); } }).start(); assertFalse(consumeQueue.take()); } @Test(timeout=10000) public void testSyncUnsubscribeWithoutSubscription() throws Exception { boolean unsubscribeSuccess = false; try { subscriber.unsubscribe(getTopic(0), localSubscriberId); } catch (ClientNotSubscribedException e) { unsubscribeSuccess = true; } catch (Exception ex) { unsubscribeSuccess = false; } assertTrue(unsubscribeSuccess); } @Test(timeout=10000) public void testAsyncUnsubscribeWithoutSubscription() throws Exception { subscriber.asyncUnsubscribe(getTopic(0), localSubscriberId, new TestCallback(queue), null); assertFalse(queue.take()); } @Test(timeout=10000) public void testCloseSubscription() throws Exception { ByteString topic = getTopic(0); subscriber.asyncSubscribe(topic, localSubscriberId, CreateOrAttach.CREATE_OR_ATTACH, new TestCallback(queue), null); assertTrue(queue.take()); startDelivery(topic, localSubscriberId, new TestMessageHandler(consumeQueue)); publisher.asyncPublish(topic, getMsg(0), new TestCallback(queue), null); assertTrue(queue.take()); assertTrue(consumeQueue.take()); // Close the subscription asynchronously subscriber.asyncCloseSubscription(topic, localSubscriberId, new TestCallback(queue), null); assertTrue(queue.take()); // Now publish a message and make sure it is not consumed by the client publisher.asyncPublish(topic, getMsg(1), new TestCallback(queue), null); assertTrue(queue.take()); // Wait a little bit just in case the message handler is still active, // consuming the message, and then putting a true value in the // consumeQueue. Thread.sleep(1000); // Put a False value on the consumeQueue so we can verify that it // is not blocked by a message consume action which already put a True // value into the queue. new Thread(new Runnable() { @Override public void run() { ConcurrencyUtils.put(consumeQueue, false); } }).start(); assertFalse(consumeQueue.take()); } @Test(timeout=10000) public void testStartDeliveryTwice() throws Exception { ByteString topic = getTopic(0); subscriber.asyncSubscribe(topic, localSubscriberId, CreateOrAttach.CREATE_OR_ATTACH, new TestCallback(queue), null); assertTrue(queue.take()); startDelivery(topic, localSubscriberId, new TestMessageHandler(consumeQueue)); try { startDelivery(topic, localSubscriberId, new TestMessageHandler(consumeQueue)); fail("Should not reach here!"); } catch (AlreadyStartDeliveryException e) { } } @Test(timeout=10000) public void testStopDelivery() throws Exception { ByteString topic = getTopic(0); subscriber.asyncSubscribe(topic, localSubscriberId, CreateOrAttach.CREATE_OR_ATTACH, new TestCallback(queue), null); assertTrue(queue.take()); startDelivery(topic, localSubscriberId, new TestMessageHandler(consumeQueue)); publisher.asyncPublish(topic, getMsg(0), new TestCallback(queue), null); assertTrue(queue.take()); assertTrue(consumeQueue.take()); // Stop the delivery for this subscription stopDelivery(topic, localSubscriberId); // Publish some more messages so they are queued up to be delivered to // the client int batchSize = 10; for (int i = 0; i < batchSize; i++) { publisher.asyncPublish(topic, getMsg(i + 1), new TestCallback(queue), null); assertTrue(queue.take()); } // Wait a little bit just in case the message handler is still active, // consuming the message, and then putting a true value in the // consumeQueue. 
Thread.sleep(1000); // Put a False value on the consumeQueue so we can verify that it // is not blocked by a message consume action which already put a True // value into the queue. new Thread(new Runnable() { @Override public void run() { ConcurrencyUtils.put(consumeQueue, false); } }).start(); assertFalse(consumeQueue.take()); // Now start delivery again and verify that the queued up messages are // consumed startDelivery(topic, localSubscriberId, new TestMessageHandler(consumeQueue)); for (int i = 0; i < batchSize; i++) { assertTrue(consumeQueue.take()); } } @Test(timeout=10000) public void testConsumedMessagesInOrder() throws Exception { ByteString topic = getTopic(0); subscriber.asyncSubscribe(topic, localSubscriberId, CreateOrAttach.CREATE_OR_ATTACH, new TestCallback(queue), null); assertTrue(queue.take()); startDelivery(topic, localSubscriberId, new TestMessageHandler(consumeQueue)); // Now publish some messages and verify that they are delivered in order // to the subscriber int batchSize = 100; for (int i = 0; i < batchSize; i++) { publisher.asyncPublish(topic, getMsg(i), new TestCallback(queue), null); } // We've sent out all of the publish messages asynchronously, // now verify that they are consumed in the correct order. for (int i = 0; i < batchSize; i++) { assertTrue(queue.take()); assertTrue(consumeQueue.take()); } } @Test(timeout=10000) public void testCreateSubscriptionFailure() throws Exception { ByteString topic = getTopic(0); subscriber.asyncSubscribe(topic, localSubscriberId, CreateOrAttach.CREATE_OR_ATTACH, new TestCallback(queue), null); assertTrue(queue.take()); // Close the subscription asynchronously subscriber.asyncCloseSubscription(topic, localSubscriberId, new TestCallback(queue), null); assertTrue(queue.take()); // Now try to create the subscription when it already exists subscriber.asyncSubscribe(topic, localSubscriberId, CreateOrAttach.CREATE, new TestCallback(queue), null); assertFalse(queue.take()); } @Test(timeout=10000) public void testCreateSubscriptionSuccess() throws Exception { ByteString topic = getTopic(0); subscriber.asyncSubscribe(topic, localSubscriberId, CreateOrAttach.CREATE, new TestCallback(queue), null); assertTrue(queue.take()); startDelivery(topic, localSubscriberId, new TestMessageHandler(consumeQueue)); int batchSize = 5; for (int i = 0; i < batchSize; i++) { publisher.asyncPublish(topic, getMsg(i), new TestCallback(queue), null); assertTrue(queue.take()); assertTrue(consumeQueue.take()); } } @Test(timeout=10000) public void testAttachToSubscriptionFailure() throws Exception { ByteString topic = getTopic(0); subscriber.asyncSubscribe(topic, localSubscriberId, CreateOrAttach.ATTACH, new TestCallback(queue), null); assertFalse(queue.take()); } // The following 4 tests are to make sure that the subscriberId validation // works when it is a local subscriber and we're expecting the subscriberId // to be in the "local" specific format. 
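// hubSubscriberId was built with the reserved HUB_SUBSCRIBER_PREFIX, so a local client is
// expected to reject it with InvalidSubscriberIdException.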
@Test(timeout=10000) public void testSyncSubscribeWithInvalidSubscriberId() throws Exception { boolean subscribeSuccess = false; try { subscriber.subscribe(getTopic(0), hubSubscriberId, CreateOrAttach.CREATE_OR_ATTACH); } catch (InvalidSubscriberIdException e) { subscribeSuccess = true; } catch (Exception ex) { subscribeSuccess = false; } assertTrue(subscribeSuccess); } @Test(timeout=10000) public void testAsyncSubscribeWithInvalidSubscriberId() throws Exception { subscriber.asyncSubscribe(getTopic(0), hubSubscriberId, CreateOrAttach.CREATE_OR_ATTACH, new TestCallback(queue), null); assertFalse(queue.take()); } @Test(timeout=10000) public void testSyncUnsubscribeWithInvalidSubscriberId() throws Exception { boolean unsubscribeSuccess = false; try { subscriber.unsubscribe(getTopic(0), hubSubscriberId); } catch (InvalidSubscriberIdException e) { unsubscribeSuccess = true; } catch (Exception ex) { unsubscribeSuccess = false; } assertTrue(unsubscribeSuccess); } @Test(timeout=10000) public void testAsyncUnsubscribeWithInvalidSubscriberId() throws Exception { subscriber.asyncUnsubscribe(getTopic(0), hubSubscriberId, new TestCallback(queue), null); assertFalse(queue.take()); } // The following 4 tests are to make sure that the subscriberId validation // also works when it is a hub subscriber and we're expecting the // subscriberId to be in the "hub" specific format. @Test(timeout=10000) public void testSyncHubSubscribeWithInvalidSubscriberId() throws Exception { Client hubClient = new HedwigHubClient(new HubClientConfiguration()); Subscriber hubSubscriber = hubClient.getSubscriber(); boolean subscribeSuccess = false; try { hubSubscriber.subscribe(getTopic(0), localSubscriberId, CreateOrAttach.CREATE_OR_ATTACH); } catch (InvalidSubscriberIdException e) { subscribeSuccess = true; } catch (Exception ex) { subscribeSuccess = false; } assertTrue(subscribeSuccess); hubClient.close(); } @Test(timeout=10000) public void testAsyncHubSubscribeWithInvalidSubscriberId() throws Exception { Client hubClient = new HedwigHubClient(new HubClientConfiguration()); Subscriber hubSubscriber = hubClient.getSubscriber(); hubSubscriber.asyncSubscribe(getTopic(0), localSubscriberId, CreateOrAttach.CREATE_OR_ATTACH, new TestCallback( queue), null); assertFalse(queue.take()); hubClient.close(); } @Test(timeout=10000) public void testSyncHubUnsubscribeWithInvalidSubscriberId() throws Exception { Client hubClient = new HedwigHubClient(new HubClientConfiguration()); Subscriber hubSubscriber = hubClient.getSubscriber(); boolean unsubscribeSuccess = false; try { hubSubscriber.unsubscribe(getTopic(0), localSubscriberId); } catch (InvalidSubscriberIdException e) { unsubscribeSuccess = true; } catch (Exception ex) { unsubscribeSuccess = false; } assertTrue(unsubscribeSuccess); hubClient.close(); } @Test(timeout=10000) public void testAsyncHubUnsubscribeWithInvalidSubscriberId() throws Exception { Client hubClient = new HedwigHubClient(new HubClientConfiguration()); Subscriber hubSubscriber = hubClient.getSubscriber(); hubSubscriber.asyncUnsubscribe(getTopic(0), localSubscriberId, new TestCallback(queue), null); assertFalse(queue.take()); hubClient.close(); } @Test(timeout=10000) public void testPublishWithBookKeeperError() throws Exception { int batchSize = 10; publishBatch(batchSize, true, false, 0); // stop all bookie servers bktb.stopAllBookieServers(); // following publish would fail with NotEnoughBookies publishBatch(batchSize, false, false, 1); // start all bookie servers bktb.startAllBookieServers(); // following publish
should succeed publishBatch(batchSize, true, false, 1); } } TestHedwigHubProxy.java000066400000000000000000000026041244507361200362010ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/integration/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.integration; import org.junit.runner.RunWith; import org.junit.runners.Parameterized; import org.junit.runners.Parameterized.Parameters; import java.util.Collection; import java.util.Arrays; @RunWith(Parameterized.class) public class TestHedwigHubProxy extends TestHedwigHub { @Parameters public static Collection configs() { return Arrays.asList(new Object[][] { { true }, { false } }); } public TestHedwigHubProxy(boolean isSubscriptionChannelSharingEnabled) { super(Mode.PROXY, isSubscriptionChannelSharingEnabled); } } TestHedwigHubRegular.java000066400000000000000000000026121244507361200364600ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/integration/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.integration; import org.junit.runner.RunWith; import org.junit.runners.Parameterized; import org.junit.runners.Parameterized.Parameters; import java.util.Collection; import java.util.Arrays; @RunWith(Parameterized.class) public class TestHedwigHubRegular extends TestHedwigHub { @Parameters public static Collection configs() { return Arrays.asList(new Object[][] { { true }, { false } }); } public TestHedwigHubRegular(boolean isSubscriptionChannelSharingEnabled) { super(Mode.REGULAR, isSubscriptionChannelSharingEnabled); } } TestHedwigHubSSL.java000066400000000000000000000025761244507361200355310ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/integration/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. 
The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.integration; import org.junit.runner.RunWith; import org.junit.runners.Parameterized; import org.junit.runners.Parameterized.Parameters; import java.util.Collection; import java.util.Arrays; @RunWith(Parameterized.class) public class TestHedwigHubSSL extends TestHedwigHub { @Parameters public static Collection configs() { return Arrays.asList(new Object[][] { { true }, { false } }); } public TestHedwigHubSSL(boolean isSubscriptionChannelSharingEnabled) { super(Mode.SSL, isSubscriptionChannelSharingEnabled); } } TestHedwigRegion.java000066400000000000000000000314251244507361200356470ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/integration/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
 */ package org.apache.hedwig.server.integration; import java.util.Arrays; import java.util.Collection; import java.util.Map; import java.util.Random; import java.util.concurrent.SynchronousQueue; import org.junit.After; import org.junit.Before; import org.junit.Test; import org.junit.runner.RunWith; import org.junit.runners.Parameterized; import org.junit.runners.Parameterized.Parameters; import com.google.protobuf.ByteString; import org.apache.hedwig.client.HedwigClient; import org.apache.hedwig.client.api.Publisher; import org.apache.hedwig.client.conf.ClientConfiguration; import org.apache.hedwig.protocol.PubSubProtocol.Message; import org.apache.hedwig.protocol.PubSubProtocol.SubscribeRequest.CreateOrAttach; import org.apache.hedwig.server.HedwigRegionTestBase; import org.apache.hedwig.server.common.ServerConfiguration; import org.apache.hedwig.server.integration.TestHedwigHub.TestCallback; import org.apache.hedwig.server.integration.TestHedwigHub.TestMessageHandler; import org.apache.hedwig.util.HedwigSocketAddress; @RunWith(Parameterized.class) public class TestHedwigRegion extends HedwigRegionTestBase { // SynchronousQueues to verify async calls private final SynchronousQueue<Boolean> queue = new SynchronousQueue<Boolean>(); private final SynchronousQueue<Boolean> consumeQueue = new SynchronousQueue<Boolean>(); private static final int TEST_RETRY_REMOTE_SUBSCRIBE_INTERVAL_VALUE = 3000; protected class NewRegionServerConfiguration extends RegionServerConfiguration { public NewRegionServerConfiguration(int serverPort, int sslServerPort, String regionName) { super(serverPort, sslServerPort, regionName); } @Override public int getRetryRemoteSubscribeThreadRunInterval() { return TEST_RETRY_REMOTE_SUBSCRIBE_INTERVAL_VALUE; } } protected class NewRegionClientConfiguration extends ClientConfiguration { @Override public boolean isSubscriptionChannelSharingEnabled() { return isSubscriptionChannelSharingEnabled; } @Override public HedwigSocketAddress getDefaultServerHedwigSocketAddress() { return regionHubAddresses.get(0).get(0); } } protected ServerConfiguration getServerConfiguration(int serverPort, int sslServerPort, String regionName) { return new NewRegionServerConfiguration(serverPort, sslServerPort, regionName); } protected ClientConfiguration getRegionClientConfiguration() { return new NewRegionClientConfiguration(); } @Parameters public static Collection configs() { return Arrays.asList(new Object[][] { { false }, { true } }); } protected boolean isSubscriptionChannelSharingEnabled; public TestHedwigRegion(boolean isSubscriptionChannelSharingEnabled) { this.isSubscriptionChannelSharingEnabled = isSubscriptionChannelSharingEnabled; } @Override @Before public void setUp() throws Exception { numRegions = 3; numServersPerRegion = 4; super.setUp(); } @Override @After public void tearDown() throws Exception { super.tearDown(); } @Test(timeout=60000) public void testMultiRegionSubscribeAndConsume() throws Exception { int batchSize = 10; // Subscribe to topics for clients in all regions for (HedwigClient client : regionClientsMap.values()) { for (int i = 0; i < batchSize; i++) { client.getSubscriber().asyncSubscribe(ByteString.copyFromUtf8("Topic" + i), ByteString.copyFromUtf8("LocalSubscriber"), CreateOrAttach.CREATE_OR_ATTACH, new TestCallback(queue), null); assertTrue(queue.take()); } } // Start delivery for the local subscribers in all regions for (HedwigClient client : regionClientsMap.values()) { for (int i = 0; i < batchSize; i++) { client.getSubscriber().startDelivery(ByteString.copyFromUtf8("Topic" + i),
ByteString.copyFromUtf8("LocalSubscriber"), new TestMessageHandler(consumeQueue)); } } // Now start publishing messages for the subscribed topics in one of the // regions and verify that it gets delivered and consumed in all of the // other ones. Publisher publisher = regionClientsMap.values().iterator().next().getPublisher(); for (int i = 0; i < batchSize; i++) { publisher.asyncPublish(ByteString.copyFromUtf8("Topic" + i), Message.newBuilder().setBody( ByteString.copyFromUtf8("Message" + i)).build(), new TestCallback(queue), null); assertTrue(queue.take()); } // Make sure each region consumes the same set of published messages. for (int i = 0; i < regionClientsMap.size(); i++) { for (int j = 0; j < batchSize; j++) { assertTrue(consumeQueue.take()); } } } /** * Test region shuts down when first subscription. * * @throws Exception */ @Test(timeout=60000) public void testSubscribeAndConsumeWhenARegionDown() throws Exception { int batchSize = 10; // first shut down a region Random r = new Random(); int regionId = r.nextInt(numRegions); stopRegion(regionId); // subscribe to topics when a region shuts down for (HedwigClient client : regionClientsMap.values()) { for (int i = 0; i < batchSize; i++) { client.getSubscriber().asyncSubscribe(ByteString.copyFromUtf8("Topic" + i), ByteString.copyFromUtf8("LocalSubscriber"), CreateOrAttach.CREATE_OR_ATTACH, new TestCallback(queue), null); assertFalse(queue.take()); } } // start region gain startRegion(regionId); // sub it again for (Map.Entry entry : regionClientsMap.entrySet()) { HedwigClient client = entry.getValue(); for (int i = 0; i < batchSize; i++) { client.getSubscriber().asyncSubscribe(ByteString.copyFromUtf8("Topic" + i), ByteString.copyFromUtf8("LocalSubscriber"), CreateOrAttach.CREATE_OR_ATTACH, new TestCallback(queue), null); assertTrue(queue.take()); } } // Start delivery for local subscribers in all regions for (Map.Entry entry : regionClientsMap.entrySet()) { HedwigClient client = entry.getValue(); for (int i = 0; i < batchSize; i++) { client.getSubscriber().startDelivery(ByteString.copyFromUtf8("Topic" + i), ByteString.copyFromUtf8("LocalSubscriber"), new TestMessageHandler(consumeQueue)); } } // Now start publishing messages for the subscribed topics in one of the // regions and verify that it gets delivered and consumed in all of the // other ones. int rid = r.nextInt(numRegions); String regionName = REGION_PREFIX + rid; Publisher publisher = regionClientsMap.get(regionName).getPublisher(); for (int i = 0; i < batchSize; i++) { publisher.asyncPublish(ByteString.copyFromUtf8("Topic" + i), Message.newBuilder().setBody( ByteString.copyFromUtf8(regionName + "-Message" + i)).build(), new TestCallback(queue), null); assertTrue(queue.take()); } // Make sure each region consumes the same set of published messages. for (int i = 0; i < regionClientsMap.size(); i++) { for (int j = 0; j < batchSize; j++) { assertTrue(consumeQueue.take()); } } } /** * Test region shuts down when attaching existing subscriptions. 
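* All regions are stopped and all but one restarted; after delivery starts, the last region is brought back and the remote-subscribe retry is expected to reattach its subscriptions.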
* * @throws Exception */ @Test(timeout=60000) public void testAttachExistingSubscriptionsWhenARegionDown() throws Exception { int batchSize = 10; // subscribe in each region first so that the (remote) subscriptions already exist for (Map.Entry<String, HedwigClient> entry : regionClientsMap.entrySet()) { HedwigClient client = entry.getValue(); for (int i = 0; i < batchSize; i++) { client.getSubscriber().asyncSubscribe(ByteString.copyFromUtf8("Topic" + i), ByteString.copyFromUtf8("LocalSubscriber"), CreateOrAttach.CREATE_OR_ATTACH, new TestCallback(queue), null); assertTrue(queue.take()); } } // stop all regions, then restart all but one randomly chosen region; its // subscriptions must be re-established by the retry thread once it returns for (int i = 0; i < numRegions; i++) { stopRegion(i); } Random r = new Random(); int regionId = r.nextInt(numRegions); for (int i = 0; i < numRegions; i++) { if (i != regionId) { startRegion(i); } } // Start delivery for local subscribers in all regions for (Map.Entry<String, HedwigClient> entry : regionClientsMap.entrySet()) { HedwigClient client = entry.getValue(); for (int i = 0; i < batchSize; i++) { client.getSubscriber().startDelivery(ByteString.copyFromUtf8("Topic" + i), ByteString.copyFromUtf8("LocalSubscriber"), new TestMessageHandler(consumeQueue)); } } // start region again startRegion(regionId); // wait for retry Thread.sleep(3 * TEST_RETRY_REMOTE_SUBSCRIBE_INTERVAL_VALUE); String regionName = REGION_PREFIX + regionId; HedwigClient client = regionClientsMap.get(regionName); for (int i = 0; i < batchSize; i++) { client.getSubscriber().asyncSubscribe(ByteString.copyFromUtf8("Topic" + i), ByteString.copyFromUtf8("LocalSubscriber"), CreateOrAttach.CREATE_OR_ATTACH, new TestCallback(queue), null); assertTrue(queue.take()); client.getSubscriber().startDelivery(ByteString.copyFromUtf8("Topic" + i), ByteString.copyFromUtf8("LocalSubscriber"), new TestMessageHandler(consumeQueue)); } // Now start publishing messages for the subscribed topics in one of the // regions and verify that it gets delivered and consumed in all of the // other ones. Publisher publisher = client.getPublisher(); for (int i = 0; i < batchSize; i++) { publisher.asyncPublish(ByteString.copyFromUtf8("Topic" + i), Message.newBuilder().setBody( ByteString.copyFromUtf8(regionName + "-Message" + i)).build(), new TestCallback(queue), null); assertTrue(queue.take()); } // Make sure each region consumes the same set of published messages. for (int i = 0; i < regionClientsMap.size(); i++) { for (int j = 0; j < batchSize; j++) { assertTrue(consumeQueue.take()); } } } } bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/meta/000077500000000000000000000000001244507361200302465ustar00rootroot00000000000000MetadataManagerFactoryTestCase.java000066400000000000000000000052311244507361200370320ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/meta/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License.
* */ package org.apache.hedwig.server.meta; import java.util.Arrays; import java.util.Collection; import org.apache.bookkeeper.metastore.InMemoryMetaStore; import org.apache.hedwig.server.common.ServerConfiguration; import org.apache.hedwig.server.meta.MetadataManagerFactory; import org.apache.hedwig.server.meta.ZkMetadataManagerFactory; import org.apache.hedwig.util.Callback; import org.apache.hedwig.zookeeper.ZooKeeperTestBase; import org.junit.After; import org.junit.Before; import org.junit.runner.RunWith; import org.junit.runners.Parameterized; import org.junit.runners.Parameterized.Parameters; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @RunWith(Parameterized.class) public abstract class MetadataManagerFactoryTestCase extends ZooKeeperTestBase { static Logger LOG = LoggerFactory.getLogger(MetadataManagerFactoryTestCase.class); protected MetadataManagerFactory metadataManagerFactory; protected ServerConfiguration conf; public MetadataManagerFactoryTestCase(String metadataManagerFactoryCls) { super(); conf = new ServerConfiguration(); conf.setMetadataManagerFactoryName(metadataManagerFactoryCls); conf.getConf().setProperty("metastore_impl_class", InMemoryMetaStore.class.getName()); InMemoryMetaStore.reset(); } @Parameters public static Collection configs() { return Arrays.asList(new Object[][] { { ZkMetadataManagerFactory.class.getName() }, { MsMetadataManagerFactory.class.getName() }, }); } @Before @Override public void setUp() throws Exception { super.setUp(); metadataManagerFactory = MetadataManagerFactory.newMetadataManagerFactory(conf, zk); } @After @Override public void tearDown() throws Exception { metadataManagerFactory.shutdown(); super.tearDown(); } } TestFactoryLayout.java000066400000000000000000000064311244507361200345030ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/meta/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
* */ package org.apache.hedwig.server.meta; import java.io.IOException; import org.apache.hedwig.protocol.PubSubProtocol.ManagerMeta; import org.apache.hedwig.server.common.ServerConfiguration; import org.apache.hedwig.zookeeper.ZooKeeperTestBase; import org.apache.hedwig.zookeeper.ZkUtils; import org.apache.zookeeper.ZooKeeper; import org.apache.zookeeper.CreateMode; import org.apache.zookeeper.ZooDefs.Ids; import org.junit.Test; import org.junit.Assert; public class TestFactoryLayout extends ZooKeeperTestBase { @Test(timeout=60000) public void testFactoryLayout() throws Exception { ServerConfiguration conf = new ServerConfiguration(); conf.setMetadataManagerFactoryName( "org.apache.hedwig.server.meta.ZkMetadataManager"); FactoryLayout layout = FactoryLayout.readLayout(zk, conf); Assert.assertTrue("Layout should be null", layout == null); String testName = "foobar"; int testVersion = 0xdeadbeef; // use layout defined in configuration also create it in zookeeper writeFactoryLayout(conf, testName, testVersion); layout = FactoryLayout.readLayout(zk, conf); Assert.assertEquals(testName, layout.getManagerMeta().getManagerImpl()); Assert.assertEquals(testVersion, layout.getManagerMeta().getManagerVersion()); } private void writeFactoryLayout(ServerConfiguration conf, String managerCls, int managerVersion) throws Exception { ManagerMeta managerMeta = ManagerMeta.newBuilder() .setManagerImpl(managerCls) .setManagerVersion(managerVersion) .build(); FactoryLayout layout = new FactoryLayout(managerMeta); layout.store(zk, conf); } @Test(timeout=60000) public void testCorruptedFactoryLayout() throws Exception { ServerConfiguration conf = new ServerConfiguration(); StringBuilder msb = new StringBuilder(); String factoryLayoutPath = FactoryLayout.getFactoryLayoutPath(msb, conf); // write corrupted manager layout ZkUtils.createFullPathOptimistic(zk, factoryLayoutPath, "BadLayout".getBytes(), Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT); try { FactoryLayout.readLayout(zk, conf); Assert.fail("Shouldn't reach here!"); } catch (IOException ie) { } } } TestMetadataManager.java000066400000000000000000000434101244507361200347070ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/meta/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. 
* */ package org.apache.hedwig.server.meta; import java.util.Map; import com.google.protobuf.ByteString; import org.apache.bookkeeper.versioning.Version; import org.apache.bookkeeper.versioning.Versioned; import org.apache.hedwig.StubCallback; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.protocol.PubSubProtocol.LedgerRanges; import org.apache.hedwig.protocol.PubSubProtocol.LedgerRange; import org.apache.hedwig.protocol.PubSubProtocol.MessageSeqId; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionData; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionState; import org.apache.hedwig.server.topics.HubInfo; import org.apache.hedwig.util.Callback; import org.apache.hedwig.util.Either; import org.apache.hedwig.util.HedwigSocketAddress; import org.junit.Test; import org.junit.Assert; public class TestMetadataManager extends MetadataManagerFactoryTestCase { public TestMetadataManager(String metadataManagerFactoryCls) { super(metadataManagerFactoryCls); } @Test(timeout=60000) public void testOwnerInfo() throws Exception { TopicOwnershipManager toManager = metadataManagerFactory.newTopicOwnershipManager(); ByteString topic = ByteString.copyFromUtf8("testOwnerInfo"); StubCallback> readCallback = new StubCallback>(); StubCallback writeCallback = new StubCallback(); StubCallback deleteCallback = new StubCallback(); Either res; HubInfo owner = new HubInfo(new HedwigSocketAddress("127.0.0.1", 8008), 999); // Write non-existed owner info toManager.writeOwnerInfo(topic, owner, Version.NEW, writeCallback, null); res = writeCallback.queue.take(); Assert.assertEquals(null, res.right()); Version v1 = res.left(); // read owner info toManager.readOwnerInfo(topic, readCallback, null); Versioned hubInfo = readCallback.queue.take().left(); Assert.assertEquals(Version.Occurred.CONCURRENTLY, v1.compare(hubInfo.getVersion())); Assert.assertEquals(owner, hubInfo.getValue()); HubInfo newOwner = new HubInfo(new HedwigSocketAddress("127.0.0.1", 8008), 1000); // write exsited owner info with null version toManager.writeOwnerInfo(topic, newOwner, Version.NEW, writeCallback, null); res = writeCallback.queue.take(); Assert.assertNotNull(res.right()); Assert.assertTrue(res.right() instanceof PubSubException.TopicOwnerInfoExistsException); // write existed owner info with right version toManager.writeOwnerInfo(topic, newOwner, v1, writeCallback, null); res = writeCallback.queue.take(); Assert.assertEquals(null, res.right()); Version v2 = res.left(); Assert.assertEquals(Version.Occurred.AFTER, v2.compare(v1)); // read owner info toManager.readOwnerInfo(topic, readCallback, null); hubInfo = readCallback.queue.take().left(); Assert.assertEquals(Version.Occurred.CONCURRENTLY, v2.compare(hubInfo.getVersion())); Assert.assertEquals(newOwner, hubInfo.getValue()); HubInfo newOwner2 = new HubInfo(new HedwigSocketAddress("127.0.0.1", 8008), 1001); // write existed owner info with bad version toManager.writeOwnerInfo(topic, newOwner2, v1, writeCallback, null); res = writeCallback.queue.take(); Assert.assertNotNull(res.right()); Assert.assertTrue(res.right() instanceof PubSubException.BadVersionException); // read owner info toManager.readOwnerInfo(topic, readCallback, null); hubInfo = readCallback.queue.take().left(); Assert.assertEquals(Version.Occurred.CONCURRENTLY, v2.compare(hubInfo.getVersion())); Assert.assertEquals(newOwner, hubInfo.getValue()); // delete existed owner info with bad version toManager.deleteOwnerInfo(topic, v1, deleteCallback, null); 
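// (The delete above carries the stale version v1: deletes, like writes, are guarded by
// optimistic concurrency control, so the manager must reject the call with
// BadVersionException instead of discarding data written at a newer version. The general
// pattern over this API is read -> mutate -> write with the version returned by the read,
// retrying from the read whenever BadVersionException comes back.)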
Assert.assertTrue(deleteCallback.queue.take().right() instanceof PubSubException.BadVersionException); // read owner info toManager.readOwnerInfo(topic, readCallback, null); hubInfo = readCallback.queue.take().left(); Assert.assertEquals(Version.Occurred.CONCURRENTLY, v2.compare(hubInfo.getVersion())); // delete existed owner info with right version toManager.deleteOwnerInfo(topic, v2, deleteCallback, null); Assert.assertEquals(null, deleteCallback.queue.take().right()); // Empty owner info toManager.readOwnerInfo(topic, readCallback, null); Assert.assertEquals(null, readCallback.queue.take().left()); // delete non-existed owner info toManager.deleteOwnerInfo(topic, Version.ANY, deleteCallback, null); Assert.assertTrue(deleteCallback.queue.take().right() instanceof PubSubException.NoTopicOwnerInfoException); toManager.close(); } @Test(timeout=60000) public void testPersistenceInfo() throws Exception { TopicPersistenceManager tpManager = metadataManagerFactory.newTopicPersistenceManager(); ByteString topic = ByteString.copyFromUtf8("testPersistenceInfo"); StubCallback> readCallback = new StubCallback>(); StubCallback writeCallback = new StubCallback(); StubCallback deleteCallback = new StubCallback(); // Write non-existed persistence info tpManager.writeTopicPersistenceInfo(topic, LedgerRanges.getDefaultInstance(), Version.NEW, writeCallback, null); Either res = writeCallback.queue.take(); Assert.assertEquals(null, res.right()); Version v1 = res.left(); // read persistence info tpManager.readTopicPersistenceInfo(topic, readCallback, null); Versioned ranges = readCallback.queue.take().left(); Assert.assertEquals(Version.Occurred.CONCURRENTLY, v1.compare(ranges.getVersion())); Assert.assertEquals(LedgerRanges.getDefaultInstance(), ranges.getValue()); LedgerRange lastRange = LedgerRange.newBuilder().setLedgerId(1).build(); LedgerRanges.Builder builder = LedgerRanges.newBuilder(); builder.addRanges(lastRange); LedgerRanges newRanges = builder.build(); // write existed persistence info with null version tpManager.writeTopicPersistenceInfo(topic, newRanges, Version.NEW, writeCallback, null); res = writeCallback.queue.take(); Assert.assertNotNull(res.right()); Assert.assertTrue(res.right() instanceof PubSubException.TopicPersistenceInfoExistsException); // write existed persistence info with right version tpManager.writeTopicPersistenceInfo(topic, newRanges, v1, writeCallback, null); res = writeCallback.queue.take(); Assert.assertEquals(null, res.right()); Version v2 = res.left(); Assert.assertEquals(Version.Occurred.AFTER, v2.compare(v1)); // read persistence info tpManager.readTopicPersistenceInfo(topic, readCallback, null); ranges = readCallback.queue.take().left(); Assert.assertEquals(Version.Occurred.CONCURRENTLY, v2.compare(ranges.getVersion())); Assert.assertEquals(newRanges, ranges.getValue()); lastRange = LedgerRange.newBuilder().setLedgerId(2).build(); builder = LedgerRanges.newBuilder(); builder.addRanges(lastRange); LedgerRanges newRanges2 = builder.build(); // write existed persistence info with bad version tpManager.writeTopicPersistenceInfo(topic, newRanges2, v1, writeCallback, null); res = writeCallback.queue.take(); Assert.assertNotNull(res.right()); Assert.assertTrue(res.right() instanceof PubSubException.BadVersionException); // read persistence info tpManager.readTopicPersistenceInfo(topic, readCallback, null); ranges = readCallback.queue.take().left(); Assert.assertEquals(Version.Occurred.CONCURRENTLY, v2.compare(ranges.getVersion())); Assert.assertEquals(newRanges, 
ranges.getValue()); // delete with bad version tpManager.deleteTopicPersistenceInfo(topic, v1, deleteCallback, null); Assert.assertTrue(deleteCallback.queue.take().right() instanceof PubSubException.BadVersionException); // read persistence info tpManager.readTopicPersistenceInfo(topic, readCallback, null); ranges = readCallback.queue.take().left(); Assert.assertEquals(Version.Occurred.CONCURRENTLY, v2.compare(ranges.getVersion())); Assert.assertEquals(newRanges, ranges.getValue()); // delete existed persistence info with right version tpManager.deleteTopicPersistenceInfo(topic, v2, deleteCallback, null); Assert.assertEquals(null, deleteCallback.queue.take().right()); // read empty persistence info tpManager.readTopicPersistenceInfo(topic, readCallback, null); Assert.assertEquals(null, readCallback.queue.take().left()); // delete non-existed persistence info tpManager.deleteTopicPersistenceInfo(topic, Version.ANY, deleteCallback, null); Assert.assertTrue(deleteCallback.queue.take().right() instanceof PubSubException.NoTopicPersistenceInfoException); tpManager.close(); } @Test(timeout=60000) public void testSubscriptionData() throws Exception { SubscriptionDataManager subManager = metadataManagerFactory.newSubscriptionDataManager(); ByteString topic = ByteString.copyFromUtf8("testSubscriptionData"); ByteString subid = ByteString.copyFromUtf8("mysub"); final StubCallback callback = new StubCallback(); StubCallback> readCallback = new StubCallback>(); StubCallback>> subsCallback = new StubCallback>>(); subManager.readSubscriptionData(topic, subid, readCallback, null); Either, PubSubException> readRes = readCallback.queue.take(); Assert.assertEquals("Found inconsistent subscription state", null, readRes.left()); Assert.assertEquals("Should not fail with PubSubException", null, readRes.right()); // read non-existed subscription state subManager.readSubscriptions(topic, subsCallback, null); Either>, PubSubException> res = subsCallback.queue.take(); Assert.assertEquals("Found more than 0 subscribers", 0, res.left().size()); Assert.assertEquals("Should not fail with PubSubException", null, res.right()); // update non-existed subscription state if (subManager.isPartialUpdateSupported()) { subManager.updateSubscriptionData(topic, subid, SubscriptionData.getDefaultInstance(), Version.ANY, callback, null); } else { subManager.replaceSubscriptionData(topic, subid, SubscriptionData.getDefaultInstance(), Version.ANY, callback, null); } Assert.assertTrue("Should fail to update a non-existed subscriber with PubSubException", callback.queue.take().right() instanceof PubSubException.NoSubscriptionStateException); Callback voidCallback = new Callback() { @Override public void operationFinished(Object ctx, Void resultOfOperation) { callback.operationFinished(ctx, null); } @Override public void operationFailed(Object ctx, PubSubException exception) { callback.operationFailed(ctx, exception); } }; // delete non-existed subscription state subManager.deleteSubscriptionData(topic, subid, Version.ANY, voidCallback, null); Assert.assertTrue("Should fail to delete a non-existed subscriber with PubSubException", callback.queue.take().right() instanceof PubSubException.NoSubscriptionStateException); long seqId = 10; MessageSeqId.Builder builder = MessageSeqId.newBuilder(); builder.setLocalComponent(seqId); MessageSeqId msgId = builder.build(); SubscriptionState.Builder stateBuilder = SubscriptionState.newBuilder(SubscriptionState.getDefaultInstance()).setMsgId(msgId); SubscriptionData data = 
SubscriptionData.newBuilder().setState(stateBuilder).build(); // create a subscription state subManager.createSubscriptionData(topic, subid, data, callback, null); Either cbResult = callback.queue.take(); Version v1 = cbResult.left(); Assert.assertEquals("Should not fail with PubSubException", null, cbResult.right()); // read subscriptions subManager.readSubscriptions(topic, subsCallback, null); res = subsCallback.queue.take(); Assert.assertEquals("Should find just 1 subscriber", 1, res.left().size()); Assert.assertEquals("Should not fail with PubSubException", null, res.right()); Versioned versionedSubData = res.left().get(subid); Assert.assertEquals(Version.Occurred.CONCURRENTLY, v1.compare(versionedSubData.getVersion())); SubscriptionData imss = versionedSubData.getValue(); Assert.assertEquals("Found inconsistent subscription state", data, imss); Assert.assertEquals("Found inconsistent last consumed seq id", seqId, imss.getState().getMsgId().getLocalComponent()); // move consume seq id seqId = 99; builder = MessageSeqId.newBuilder(); builder.setLocalComponent(seqId); msgId = builder.build(); stateBuilder = SubscriptionState.newBuilder(data.getState()).setMsgId(msgId); data = SubscriptionData.newBuilder().setState(stateBuilder).build(); // update subscription state if (subManager.isPartialUpdateSupported()) { subManager.updateSubscriptionData(topic, subid, data, versionedSubData.getVersion(), callback, null); } else { subManager.replaceSubscriptionData(topic, subid, data, versionedSubData.getVersion(), callback, null); } cbResult = callback.queue.take(); Assert.assertEquals("Fail to update a subscription state", null, cbResult.right()); Version v2 = cbResult.left(); // read subscription state subManager.readSubscriptionData(topic, subid, readCallback, null); Assert.assertEquals("Found inconsistent subscription state", data, readCallback.queue.take().left().getValue()); // read subscriptions again subManager.readSubscriptions(topic, subsCallback, null); res = subsCallback.queue.take(); Assert.assertEquals("Should find just 1 subscriber", 1, res.left().size()); Assert.assertEquals("Should not fail with PubSubException", null, res.right()); versionedSubData = res.left().get(subid); Assert.assertEquals(Version.Occurred.CONCURRENTLY, v2.compare(versionedSubData.getVersion())); imss = res.left().get(subid).getValue(); Assert.assertEquals("Found inconsistent subscription state", data, imss); Assert.assertEquals("Found inconsistent last consumed seq id", seqId, imss.getState().getMsgId().getLocalComponent()); // update or replace subscription data with bad version if (subManager.isPartialUpdateSupported()) { subManager.updateSubscriptionData(topic, subid, data, v1, callback, null); } else { subManager.replaceSubscriptionData(topic, subid, data, v1, callback, null); } Assert.assertTrue(callback.queue.take().right() instanceof PubSubException.BadVersionException); // delete with bad version subManager.deleteSubscriptionData(topic, subid, v1, voidCallback, null); Assert.assertTrue(callback.queue.take().right() instanceof PubSubException.BadVersionException); subManager.deleteSubscriptionData(topic, subid, res.left().get(subid).getVersion(), voidCallback, null); Assert.assertEquals("Fail to delete an existed subscriber", null, callback.queue.take().right()); // read subscription states again subManager.readSubscriptions(topic, subsCallback, null); res = subsCallback.queue.take(); Assert.assertEquals("Found more than 0 subscribers", 0, res.left().size()); Assert.assertEquals("Should not fail with 
PubSubException", null, res.right()); subManager.close(); } } TestMetadataManagerFactory.java000066400000000000000000000242051244507361200362400ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/meta/* * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. * */ package org.apache.hedwig.server.meta; import org.apache.zookeeper.WatchedEvent; import org.apache.zookeeper.Watcher; import org.apache.zookeeper.ZooKeeper; import org.apache.zookeeper.CreateMode; import org.apache.zookeeper.ZooDefs.Ids; import java.io.IOException; import java.util.concurrent.CyclicBarrier; import java.util.concurrent.CountDownLatch; import java.util.ArrayList; import java.util.Iterator; import java.util.List; import java.util.Map; import com.google.protobuf.ByteString; import org.apache.hedwig.protocol.PubSubProtocol.ManagerMeta; import org.apache.hedwig.server.common.ServerConfiguration; import org.apache.hedwig.zookeeper.ZooKeeperTestBase; import org.junit.After; import org.junit.Before; import org.junit.Test; import org.junit.Assert; import org.slf4j.Logger; import org.slf4j.LoggerFactory; public class TestMetadataManagerFactory extends ZooKeeperTestBase { static Logger LOG = LoggerFactory.getLogger(TestMetadataManagerFactory.class); static class TestServerConfiguration extends ServerConfiguration { String hedwigPrefix = "/hedwig"; @Override public String getZkPrefix() { return hedwigPrefix; } public void setZkPrefix(String prefix) { this.hedwigPrefix = prefix; } } static class DummyMetadataManagerFactory extends MetadataManagerFactory { static int VERSION = 10; public int getCurrentVersion() { return VERSION; } public MetadataManagerFactory initialize(ServerConfiguration cfg, ZooKeeper zk, int version) throws IOException { if (version != VERSION) { throw new IOException("unmatched manager version"); } // do nothing return this; } public void shutdown() {} public Iterator getTopics() { return null; } public TopicPersistenceManager newTopicPersistenceManager() { return null; } public SubscriptionDataManager newSubscriptionDataManager() { return null; } public TopicOwnershipManager newTopicOwnershipManager() { return null; } public void format(ServerConfiguration cfg, ZooKeeper zk) throws IOException { // do nothing } } private void writeFactoryLayout(ServerConfiguration conf, String factoryCls, int factoryVersion) throws Exception { ManagerMeta meta = ManagerMeta.newBuilder() .setManagerImpl(factoryCls) .setManagerVersion(factoryVersion).build(); new FactoryLayout(meta).store(zk, conf); } /** * Test bad server configuration */ @Test(timeout=60000) public void testBadConf() throws Exception { TestServerConfiguration conf = new TestServerConfiguration(); String root0 = "/goodconf"; conf.setZkPrefix(root0); MetadataManagerFactory m = 
MetadataManagerFactory.newMetadataManagerFactory(conf, zk); Assert.assertTrue("MetadataManagerFactory is unexpected type", (m instanceof ZkMetadataManagerFactory)); // mismatching conf conf.setMetadataManagerFactoryName(DummyMetadataManagerFactory.class.getName()); try { MetadataManagerFactory.newMetadataManagerFactory(conf, zk); Assert.fail("Shouldn't reach here"); } catch (Exception e) { Assert.assertTrue("Invalid exception", e.getMessage().contains("does not match existing factory")); } // invalid metadata manager String root1 = "/badconf1"; conf.setZkPrefix(root1); conf.setMetadataManagerFactoryName("DoesNotExist"); try { MetadataManagerFactory.newMetadataManagerFactory(conf, zk); Assert.fail("Shouldn't reach here"); } catch (Exception e) { Assert.assertTrue("Invalid exception", e.getMessage().contains("Failed to get metadata manager factory class from configuration")); } } /** * Test bad zk configuration */ @Test(timeout=60000) public void testBadZkContents() throws Exception { TestServerConfiguration conf = new TestServerConfiguration(); // bad type in zookeeper String root0 = "/badzk0"; conf.setZkPrefix(root0); writeFactoryLayout(conf, "DoesNotExist", 0xdeadbeef); try { MetadataManagerFactory.newMetadataManagerFactory(conf, zk); Assert.fail("Shouldn't reach here"); } catch (Exception e) { Assert.assertTrue("Invalid exception", e.getMessage().contains("No class found to instantiate metadata manager factory")); } // bad version in zookeeper String root1 = "/badzk1"; conf.setZkPrefix(root1); writeFactoryLayout(conf, ZkMetadataManagerFactory.class.getName(), 0xdeadbeef); try { MetadataManagerFactory.newMetadataManagerFactory(conf, zk); Assert.fail("Shouldn't reach here"); } catch (Exception e) { Assert.assertTrue("Invalid exception", e.getMessage().contains("Incompatible ZkMetadataManagerFactory version")); } } private class CreateMMThread extends Thread { private boolean success = false; private final String factoryCls; private final String root; private final CyclicBarrier barrier; private ZooKeeper zkc; CreateMMThread(String root, String factoryCls, CyclicBarrier barrier) throws Exception { this.factoryCls = factoryCls; this.barrier = barrier; this.root = root; final CountDownLatch latch = new CountDownLatch(1); zkc = new ZooKeeper(hostPort, 10000, new Watcher() { public void process(WatchedEvent event) { latch.countDown(); } }); latch.await(); } public void run() { TestServerConfiguration conf = new TestServerConfiguration(); conf.setZkPrefix(root); conf.setMetadataManagerFactoryName(factoryCls); try { barrier.await(); MetadataManagerFactory.newMetadataManagerFactory(conf, zkc); success = true; } catch (Exception e) { LOG.error("Failed to create metadata manager factory", e); } } public boolean isSuccessful() { return success; } public void close() throws Exception { zkc.close(); } } // test concurrent @Test(timeout=60000) public void testConcurrent1() throws Exception { /// everyone creates the same int numThreads = 50; // bad version in zookeeper String root0 = "/lmroot0"; CyclicBarrier barrier = new CyclicBarrier(numThreads+1); List threads = new ArrayList(numThreads); for (int i = 0; i < numThreads; i++) { CreateMMThread t = new CreateMMThread(root0, ZkMetadataManagerFactory.class.getName(), barrier); t.start(); threads.add(t); } barrier.await(); boolean success = true; for (CreateMMThread t : threads) { t.join(); t.close(); success = t.isSuccessful() && success; } Assert.assertTrue("Not all metadata manager factories created", success); } @Test(timeout=60000) public void 
testConcurrent2() throws Exception { /// odd create different int numThreadsEach = 25; // bad version in zookeeper String root0 = "/lmroot0"; CyclicBarrier barrier = new CyclicBarrier(numThreadsEach*2+1); List threadsA = new ArrayList(numThreadsEach); for (int i = 0; i < numThreadsEach; i++) { CreateMMThread t = new CreateMMThread(root0, ZkMetadataManagerFactory.class.getName(), barrier); t.start(); threadsA.add(t); } List threadsB = new ArrayList(numThreadsEach); for (int i = 0; i < numThreadsEach; i++) { CreateMMThread t = new CreateMMThread(root0, DummyMetadataManagerFactory.class.getName(), barrier); t.start(); threadsB.add(t); } barrier.await(); int numSuccess = 0; int numFails = 0; for (CreateMMThread t : threadsA) { t.join(); t.close(); if (t.isSuccessful()) { numSuccess++; } else { numFails++; } } for (CreateMMThread t : threadsB) { t.join(); t.close(); if (t.isSuccessful()) { numSuccess++; } else { numFails++; } } Assert.assertEquals("Incorrect number of successes", numThreadsEach, numSuccess); Assert.assertEquals("Incorrect number of failures", numThreadsEach, numFails); } } bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/netty/000077500000000000000000000000001244507361200304635ustar00rootroot00000000000000TestPubSubServer.java000066400000000000000000000237441244507361200345100ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/netty/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package org.apache.hedwig.server.netty; import java.io.IOException; import java.lang.Thread.UncaughtExceptionHandler; import java.net.InetSocketAddress; import java.util.LinkedList; import java.util.List; import java.util.concurrent.Executors; import java.util.concurrent.SynchronousQueue; import org.apache.commons.configuration.ConfigurationException; import org.apache.zookeeper.WatchedEvent; import org.apache.zookeeper.Watcher; import org.apache.zookeeper.ZooKeeper; import org.junit.Test; import org.apache.bookkeeper.test.PortManager; import com.google.protobuf.ByteString; import org.apache.hedwig.client.conf.ClientConfiguration; import org.apache.hedwig.client.HedwigClient; import org.apache.hedwig.client.api.Publisher; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.protocol.PubSubProtocol.Message; import org.apache.hedwig.server.PubSubServerStandAloneTestBase; import org.apache.hedwig.server.common.ServerConfiguration; import org.apache.hedwig.server.topics.AbstractTopicManager; import org.apache.hedwig.server.topics.TopicManager; import org.apache.hedwig.server.LoggingExceptionHandler; import org.apache.hedwig.util.Callback; import org.apache.hedwig.util.HedwigSocketAddress; import org.apache.hedwig.zookeeper.SafeAsyncZKCallback; public class TestPubSubServer extends PubSubServerStandAloneTestBase { @Test(timeout=60000) public void testSecondServer() throws Exception { PubSubServer server1 = new PubSubServer(new StandAloneServerConfiguration() { @Override public int getServerPort() { return super.getServerPort() + 1; } }, new ClientConfiguration(), new LoggingExceptionHandler()); server1.start(); server1.shutdown(); } class RecordingUncaughtExceptionHandler implements Thread.UncaughtExceptionHandler { SynchronousQueue queue; public RecordingUncaughtExceptionHandler(SynchronousQueue queue) { this.queue = queue; } @Override public void uncaughtException(Thread t, Throwable e) { queue.add(e); } } private interface TopicManagerInstantiator { public TopicManager instantiateTopicManager() throws IOException; } PubSubServer startServer(final UncaughtExceptionHandler uncaughtExceptionHandler, final int port, final TopicManagerInstantiator instantiator) throws Exception { PubSubServer server = new PubSubServer(new StandAloneServerConfiguration() { @Override public int getServerPort() { return port; } }, new ClientConfiguration(), uncaughtExceptionHandler) { @Override protected TopicManager instantiateTopicManager() throws IOException { return instantiator.instantiateTopicManager(); } }; server.start(); return server; } public void runPublishRequest(final int port) throws Exception { Publisher publisher = new HedwigClient(new ClientConfiguration() { @Override public InetSocketAddress getDefaultServerHost() { return new InetSocketAddress("localhost", port); } }).getPublisher(); publisher.asyncPublish(ByteString.copyFromUtf8("blah"), Message.newBuilder().setBody( ByteString.copyFromUtf8("blah")).build(), new Callback() { @Override public void operationFailed(Object ctx, PubSubException exception) { assertTrue(false); } @Override public void operationFinished(Object ctx, Void resultOfOperation) { assertTrue(false); } }, null); } @Test(timeout=60000) public void testUncaughtExceptionInNettyThread() throws Exception { SynchronousQueue queue = new SynchronousQueue(); RecordingUncaughtExceptionHandler uncaughtExceptionHandler = new RecordingUncaughtExceptionHandler(queue); final int port = PortManager.nextFreePort(); PubSubServer server = 
startServer(uncaughtExceptionHandler, port, new TopicManagerInstantiator() { @Override public TopicManager instantiateTopicManager() throws IOException { return new AbstractTopicManager(new ServerConfiguration(), Executors.newSingleThreadScheduledExecutor()) { @Override protected void realGetOwner(ByteString topic, boolean shouldClaim, Callback cb, Object ctx) { throw new RuntimeException("this exception should be uncaught"); } @Override protected void postReleaseCleanup(ByteString topic, Callback cb, Object ctx) { } }; } }); runPublishRequest(port); assertEquals(RuntimeException.class, queue.take().getClass()); server.shutdown(); } @Test(timeout=60000) public void testUncaughtExceptionInZKThread() throws Exception { SynchronousQueue queue = new SynchronousQueue(); RecordingUncaughtExceptionHandler uncaughtExceptionHandler = new RecordingUncaughtExceptionHandler(queue); final int port = PortManager.nextFreePort(); final String hostPort = "127.0.0.1:" + PortManager.nextFreePort(); PubSubServer server = startServer(uncaughtExceptionHandler, port, new TopicManagerInstantiator() { @Override public TopicManager instantiateTopicManager() throws IOException { return new AbstractTopicManager(new ServerConfiguration(), Executors.newSingleThreadScheduledExecutor()) { @Override protected void realGetOwner(ByteString topic, boolean shouldClaim, Callback cb, Object ctx) { ZooKeeper zookeeper; try { zookeeper = new ZooKeeper(hostPort, 60000, new Watcher() { @Override public void process(WatchedEvent event) { // TODO Auto-generated method stub } }); } catch (IOException e) { throw new RuntimeException(e); } zookeeper.getData("/fake", false, new SafeAsyncZKCallback.DataCallback() { @Override public void safeProcessResult(int rc, String path, Object ctx, byte[] data, org.apache.zookeeper.data.Stat stat) { throw new RuntimeException("This should go to the uncaught exception handler"); } }, null); } @Override protected void postReleaseCleanup(ByteString topic, Callback cb, Object ctx) { } }; } }); runPublishRequest(port); assertEquals(RuntimeException.class, queue.take().getClass()); server.shutdown(); } @Test(timeout=60000) public void testInvalidServerConfiguration() throws Exception { boolean success = false; ServerConfiguration conf = new ServerConfiguration() { @Override public boolean isInterRegionSSLEnabled() { return conf.getBoolean(INTER_REGION_SSL_ENABLED, true); } @Override public List getRegions() { List regionsList = new LinkedList(); regionsList.add("regionHost1:4080:9876"); regionsList.add("regionHost2:4080"); regionsList.add("regionHost3:4080:9876"); return regionsList; } }; try { conf.validate(); } catch (ConfigurationException e) { logger.error("Invalid configuration: ", e); success = true; } assertTrue(success); } @Test(timeout=60000) public void testValidServerConfiguration() throws Exception { boolean success = true; ServerConfiguration conf = new ServerConfiguration() { @Override public boolean isInterRegionSSLEnabled() { return conf.getBoolean(INTER_REGION_SSL_ENABLED, true); } @Override public List getRegions() { List regionsList = new LinkedList(); regionsList.add("regionHost1:4080:9876"); regionsList.add("regionHost2:4080:2938"); regionsList.add("regionHost3:4080:9876"); return regionsList; } }; try { conf.validate(); } catch (ConfigurationException e) { logger.error("Invalid configuration: ", e); success = false; } assertTrue(success); } } 
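/*
 * Note on the two uncaught-exception tests above: they depend on PubSubServer
 * routing uncaught throwables from its internal Netty and ZooKeeper threads to
 * the Thread.UncaughtExceptionHandler passed into its constructor. A minimal
 * sketch of such wiring (illustrative only; the names below are assumptions,
 * not the server's actual fields):
 *
 *   ThreadGroup tg = new ThreadGroup("hedwig") {
 *       @Override
 *       public void uncaughtException(Thread t, Throwable e) {
 *           handler.uncaughtException(t, e); // forward to the supplied handler
 *       }
 *   };
 *   // server threads (Netty boss/worker, ZK callback executors) are created in tg
 *
 * RecordingUncaughtExceptionHandler above simply hands each Throwable to a
 * SynchronousQueue so the test thread can take() it and assert on its type.
 */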
TestServerStats.java000066400000000000000000000027701244507361200344020ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/netty/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.netty; import static org.junit.Assert.assertEquals; import org.apache.hedwig.server.netty.ServerStats.OpStats; import org.junit.Test; /** Tests statistics updating in the hedwig server. */ public class TestServerStats { /** * Tests that updateLatency does not fail with * ArrayIndexOutOfBoundsException when the latency passed in is negative. */ @Test(timeout=60000) public void testUpdateLatencyShouldNotFailWithAIOBEWithNegativeLatency() throws Exception { OpStats opStat = new OpStats(); opStat.updateLatency(-10); assertEquals("Should not update any latency metrics", 0, opStat.numSuccessOps); } } WriteRecordingChannel.java000066400000000000000000000106371244507361200354760ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/netty/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License.
*/ package org.apache.hedwig.server.netty; import java.net.InetSocketAddress; import java.net.SocketAddress; import java.util.LinkedList; import java.util.List; import org.jboss.netty.channel.Channel; import org.jboss.netty.channel.ChannelConfig; import org.jboss.netty.channel.ChannelFactory; import org.jboss.netty.channel.ChannelFuture; import org.jboss.netty.channel.ChannelPipeline; import org.jboss.netty.channel.DefaultChannelFuture; import org.jboss.netty.channel.SucceededChannelFuture; public class WriteRecordingChannel implements Channel { public boolean closed = false; ChannelFuture closingFuture = new DefaultChannelFuture(this, false); List messagesWritten = new LinkedList(); public List getMessagesWritten() { return messagesWritten; } public void clearMessages() { messagesWritten.clear(); } @Override public ChannelFuture bind(SocketAddress localAddress) { throw new RuntimeException("Not intended"); } @Override public ChannelFuture close() { closed = true; closingFuture.setSuccess(); return new SucceededChannelFuture(this); } @Override public ChannelFuture connect(SocketAddress remoteAddress) { throw new RuntimeException("Not intended"); } @Override public ChannelFuture disconnect() { return close(); } @Override public ChannelFuture getCloseFuture() { return closingFuture; } @Override public ChannelConfig getConfig() { throw new RuntimeException("Not intended"); } @Override public ChannelFactory getFactory() { throw new RuntimeException("Not intended"); } @Override public Integer getId() { throw new RuntimeException("Not intended"); } @Override public int getInterestOps() { throw new RuntimeException("Not intended"); } @Override public SocketAddress getLocalAddress() { return new InetSocketAddress("localhost", 1234); } @Override public Channel getParent() { throw new RuntimeException("Not intended"); } @Override public ChannelPipeline getPipeline() { throw new RuntimeException("Not intended"); } @Override public SocketAddress getRemoteAddress() { return new InetSocketAddress("www.yahoo.com", 80); } @Override public boolean isBound() { throw new RuntimeException("Not intended"); } @Override public boolean isConnected() { return closed == false; } @Override public boolean isOpen() { throw new RuntimeException("Not intended"); } @Override public boolean isReadable() { throw new RuntimeException("Not intended"); } @Override public boolean isWritable() { throw new RuntimeException("Not intended"); } @Override public ChannelFuture setInterestOps(int interestOps) { throw new RuntimeException("Not intended"); } @Override public ChannelFuture setReadable(boolean readable) { throw new RuntimeException("Not intended"); } @Override public ChannelFuture unbind() { throw new RuntimeException("Not intended"); } @Override public ChannelFuture write(Object message) { messagesWritten.add(message); return new SucceededChannelFuture(this); } @Override public ChannelFuture write(Object message, SocketAddress remoteAddress) { throw new RuntimeException("Not intended"); } @Override public int compareTo(Channel o) { throw new RuntimeException("Not intended"); } } bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/persistence/000077500000000000000000000000001244507361200316445ustar00rootroot00000000000000BookKeeperTestBase.java000066400000000000000000000204401244507361200361110ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/persistence/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor 
license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.persistence; import java.net.InetAddress; import java.nio.ByteBuffer; import java.io.File; import java.io.IOException; import java.util.LinkedList; import java.util.List; import java.util.Random; import org.apache.bookkeeper.replication.ReplicationException.CompatibilityException; import org.apache.bookkeeper.replication.ReplicationException.UnavailableException; import org.apache.bookkeeper.test.PortManager; import org.apache.bookkeeper.bookie.Bookie; import org.apache.bookkeeper.bookie.BookieException; import org.apache.bookkeeper.conf.ClientConfiguration; import org.apache.bookkeeper.conf.ServerConfiguration; import org.apache.bookkeeper.client.BookKeeper; import org.apache.bookkeeper.proto.BookieServer; import org.apache.zookeeper.CreateMode; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.ZooKeeper; import org.apache.zookeeper.ZooDefs.Ids; import org.junit.After; import org.junit.Before; import org.junit.Test; import org.apache.hedwig.util.FileUtils; import org.apache.hedwig.zookeeper.ZooKeeperTestBase; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** * This is a base class for any tests that require a BookKeeper client/server * setup. * */ public class BookKeeperTestBase extends ZooKeeperTestBase { private static Logger LOG = LoggerFactory.getLogger(BookKeeperTestBase.class); class TestBookie extends Bookie { final long readDelay; public TestBookie(ServerConfiguration conf, long readDelay) throws IOException, KeeperException, InterruptedException, BookieException { super(conf); this.readDelay = readDelay; } @Override public ByteBuffer readEntry(long ledgerId, long entryId) throws IOException, NoLedgerException { if (readDelay > 0) { try { Thread.sleep(readDelay); } catch (InterruptedException ie) { } } return super.readEntry(ledgerId, entryId); } } class TestBookieServer extends BookieServer { public TestBookieServer(ServerConfiguration conf) throws IOException, KeeperException, InterruptedException, BookieException, UnavailableException, CompatibilityException { super(conf); } protected Bookie newBookie(ServerConfiguration conf) throws IOException, KeeperException, InterruptedException, BookieException { return new TestBookie(conf, readDelay); } } // BookKeeper Server variables private List bookiesList; private List bkConfsList; // String constants used for creating the bookie server files. private static final String PREFIX = "bookie"; private static final String SUFFIX = "test"; // readDelay protected long readDelay; // Variable to decide how many bookie servers to set up. 
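// (A non-zero readDelay makes TestBookie sleep before every readEntry call,
// letting tests simulate slow bookies; the default of 0 disables the sleep.)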
private final int numBookies; // BookKeeper client instance protected BookKeeper bk; protected ServerConfiguration baseConf = new ServerConfiguration(); protected ClientConfiguration baseClientConf = new ClientConfiguration(); // Constructor public BookKeeperTestBase(int numBookies) { this(numBookies, 0L); } public BookKeeperTestBase(int numBookies, long readDelay) { this.numBookies = numBookies; this.readDelay = readDelay; } public BookKeeperTestBase() { // By default, use 3 bookies. this(3); } // Getter for the ZooKeeper client instance that the parent class sets up. protected ZooKeeper getZooKeeperClient() { return zk; } // Give junit a fake test so that its happy @Test(timeout=60000) public void testNothing() throws Exception { } @Override @Before public void setUp() throws Exception { super.setUp(); // Initialize the zk client with values try { zk.create("/ledgers", new byte[0], Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT); zk.create("/ledgers/available", new byte[0], Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT); } catch (KeeperException e) { LOG.error("Error setting up", e); } catch (InterruptedException e) { LOG.error("Error setting up", e); } // Create Bookie Servers bookiesList = new LinkedList(); bkConfsList = new LinkedList(); for (int i = 0; i < numBookies; i++) { startUpNewBookieServer(); } // Create the BookKeeper client bk = new BookKeeper(hostPort); } public String getZkHostPort() { return hostPort; } @Override @After public void tearDown() throws Exception { // Shutdown all of the bookie servers for (BookieServer bs : bookiesList) { bs.shutdown(); } // Close the BookKeeper client bk.close(); super.tearDown(); } public void stopAllBookieServers() throws Exception { for (BookieServer bs : bookiesList) { bs.shutdown(); } bookiesList.clear(); } public void startAllBookieServers() throws Exception { for (ServerConfiguration conf : bkConfsList) { bookiesList.add(startBookie(conf)); } } public void suspendAllBookieServers() throws Exception { for (BookieServer bs : bookiesList) { bs.suspendProcessing(); } } public void resumeAllBookieServers() throws Exception { for (BookieServer bs : bookiesList) { bs.resumeProcessing(); } } public void tearDownOneBookieServer() throws Exception { Random r = new Random(); int bi = r.nextInt(bookiesList.size()); BookieServer bs = bookiesList.get(bi); bs.shutdown(); bookiesList.remove(bi); bkConfsList.remove(bi); } public void startUpNewBookieServer() throws Exception { int port = PortManager.nextFreePort(); File tmpDir = FileUtils.createTempDirectory( PREFIX + port, SUFFIX); ServerConfiguration conf = newServerConfiguration( port, hostPort, tmpDir, new File[] { tmpDir }); bookiesList.add(startBookie(conf)); bkConfsList.add(conf); } /** * Helper method to startup a bookie server using a configuration object * * @param conf * Server Configuration Object * */ private BookieServer startBookie(ServerConfiguration conf) throws Exception { BookieServer server = new TestBookieServer(conf); server.start(); int port = conf.getBookiePort(); while(zk.exists("/ledgers/available/" + InetAddress.getLocalHost().getHostAddress() + ":" + port, false) == null) { Thread.sleep(500); } return server; } protected ServerConfiguration newServerConfiguration(int port, String zkServers, File journalDir, File[] ledgerDirs) { ServerConfiguration conf = new ServerConfiguration(baseConf); conf.setAllowLoopback(true); conf.setBookiePort(port); conf.setZkServers(zkServers); conf.setJournalDirName(journalDir.getPath()); String[] ledgerDirNames = new String[ledgerDirs.length]; for 
(int i=0; i callback, Object context) { try { int value = Integer.valueOf(msg.getBody().toStringUtf8()); if (value == expected.get()) { expected.incrementAndGet(); } else { // error condition logger.error("Did not receive expected value, expected {}, got {}", expected.get(), value); expected.set(0); latch.countDown(); } if (expected.get() == X) { latch.countDown(); } callback.operationFinished(context, null); } catch (Exception e) { logger.error("Received bad message", e); latch.countDown();// will error on match } } }); assertTrue("Timed out waiting for messages Y is " + Y + " expected is currently " + expected.get(), latch.await(10, TimeUnit.SECONDS)); assertEquals("Should be expected message with " + X, X, expected.get()); sub.stopDelivery(topic, subid); sub.closeSubscription(topic, subid); } @Test(timeout=60000) public void testBasicBounding() throws Exception { Client client = new HedwigClient(new MessageBoundClientConfiguration(5)); Publisher pub = client.getPublisher(); Subscriber sub = client.getSubscriber(); ByteString topic = ByteString.copyFromUtf8("basicBoundingTopic"); ByteString subid = ByteString.copyFromUtf8("basicBoundingSubId"); sub.subscribe(topic, subid, CreateOrAttach.CREATE); sub.closeSubscription(topic, subid); sendXExpectLastY(pub, sub, topic, subid, 1000, 5); client.close(); } @Test(timeout=60000) public void testMultipleSubscribers() throws Exception { ByteString topic = ByteString.copyFromUtf8("multiSubTopic"); Client client = new HedwigClient(new HubClientConfiguration()); Publisher pub = client.getPublisher(); Subscriber sub = client.getSubscriber(); SubscriptionOptions options5 = SubscriptionOptions.newBuilder() .setCreateOrAttach(CreateOrAttach.CREATE).setMessageBound(5).build(); SubscriptionOptions options20 = SubscriptionOptions.newBuilder() .setCreateOrAttach(CreateOrAttach.CREATE).setMessageBound(20).build(); SubscriptionOptions optionsUnbounded = SubscriptionOptions.newBuilder() .setCreateOrAttach(CreateOrAttach.CREATE).build(); ByteString subid5 = ByteString.copyFromUtf8("bound5SubId"); ByteString subid20 = ByteString.copyFromUtf8("bound20SubId"); ByteString subidUnbounded = ByteString.copyFromUtf8("noboundSubId"); sub.subscribe(topic, subid5, options5); sub.closeSubscription(topic, subid5); sendXExpectLastY(pub, sub, topic, subid5, 1000, 5); sub.subscribe(topic, subid20, options20); sub.closeSubscription(topic, subid20); sendXExpectLastY(pub, sub, topic, subid20, 1000, 20); sub.subscribe(topic, subidUnbounded, optionsUnbounded); sub.closeSubscription(topic, subidUnbounded); sendXExpectLastY(pub, sub, topic, subidUnbounded, 10000, 10000); sub.unsubscribe(topic, subidUnbounded); sendXExpectLastY(pub, sub, topic, subid20, 1000, 20); sub.unsubscribe(topic, subid20); sendXExpectLastY(pub, sub, topic, subid5, 1000, 5); sub.unsubscribe(topic, subid5); client.close(); } @Test(timeout=60000) public void testUpdateMessageBound() throws Exception { ByteString topic = ByteString.copyFromUtf8("UpdateMessageBound"); Client client = new HedwigClient(new HubClientConfiguration()); Publisher pub = client.getPublisher(); Subscriber sub = client.getSubscriber(); SubscriptionOptions options5 = SubscriptionOptions.newBuilder() .setCreateOrAttach(CreateOrAttach.CREATE_OR_ATTACH).setMessageBound(5).build(); SubscriptionOptions options20 = SubscriptionOptions.newBuilder() .setCreateOrAttach(CreateOrAttach.CREATE_OR_ATTACH).setMessageBound(20).build(); SubscriptionOptions options10 = SubscriptionOptions.newBuilder() 
.setCreateOrAttach(CreateOrAttach.CREATE_OR_ATTACH).setMessageBound(10).build(); ByteString subid = ByteString.copyFromUtf8("updateSubId"); sub.subscribe(topic, subid, options5); sub.closeSubscription(topic, subid); sendXExpectLastY(pub, sub, topic, subid, 50, 5); // update bound to 20 sub.subscribe(topic, subid, options20); sub.closeSubscription(topic, subid); sendXExpectLastY(pub, sub, topic, subid, 50, 20); // update bound to 10 sub.subscribe(topic, subid, options10); sub.closeSubscription(topic, subid); sendXExpectLastY(pub, sub, topic, subid, 50, 10); // message bound is not provided, no update sub.subscribe(topic, subid, CreateOrAttach.CREATE_OR_ATTACH); sub.closeSubscription(topic, subid); sendXExpectLastY(pub, sub, topic, subid, 50, 10); client.close(); } @Test(timeout=60000) public void testLedgerGC() throws Exception { Client client = new HedwigClient(new MessageBoundClientConfiguration()); Publisher pub = client.getPublisher(); Subscriber sub = client.getSubscriber(); String ledgersPath = "/hedwig/standalone/topics/testGCTopic/ledgers"; ByteString topic = ByteString.copyFromUtf8("testGCTopic"); ByteString subid = ByteString.copyFromUtf8("testGCSubId"); sub.subscribe(topic, subid, CreateOrAttach.CREATE_OR_ATTACH); sub.closeSubscription(topic, subid); for (int i = 1; i <= 100; i++) { pub.publish(topic, Message.newBuilder().setBody( ByteString.copyFromUtf8(String.valueOf(i))).build()); } LedgerRanges r = LedgerRanges.parseFrom(bktb.getZooKeeperClient().getData(ledgersPath, false, null)); assertEquals("Should only have 1 ledger yet", 1, r.getRangesList().size()); long firstLedger = r.getRangesList().get(0).getLedgerId(); stopHubServers(); startHubServers(); pub.publish(topic, Message.newBuilder().setBody( ByteString.copyFromUtf8(String.valueOf(0xdeadbeef))).build()); r = LedgerRanges.parseFrom(bktb.getZooKeeperClient().getData(ledgersPath, false, null)); assertEquals("Should have 2 ledgers after restart", 2, r.getRangesList().size()); for (int i = 100; i <= 200; i++) { pub.publish(topic, Message.newBuilder().setBody( ByteString.copyFromUtf8(String.valueOf(i))).build()); } Thread.sleep(5000); // give GC a chance to happen r = LedgerRanges.parseFrom(bktb.getZooKeeperClient().getData(ledgersPath, false, null)); long secondLedger = r.getRangesList().get(0).getLedgerId(); assertEquals("Should only have 1 ledger after GC", 1, r.getRangesList().size()); // ensure original ledger doesn't exist String firstLedgerPath = String.format("/ledgers/L%010d", firstLedger); String secondLedgerPath = String.format("/ledgers/L%010d", secondLedger); assertNull("Ledger should not exist", bktb.getZooKeeperClient().exists(firstLedgerPath, false)); assertNotNull("Ledger should exist", bktb.getZooKeeperClient().exists(secondLedgerPath, false)); client.close(); } } StubPersistenceManager.java000066400000000000000000000122471244507361200370530ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/persistence/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. 
You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.hedwig.server.persistence;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import com.google.protobuf.ByteString;

import org.apache.hedwig.exceptions.PubSubException.ServiceDownException;
import org.apache.hedwig.protocol.PubSubProtocol.Message;
import org.apache.hedwig.protocol.PubSubProtocol.MessageSeqId;
import org.apache.hedwig.protoextensions.MessageIdUtils;
import org.apache.hedwig.server.persistence.ScanCallback.ReasonForFinish;

public class StubPersistenceManager implements PersistenceManagerWithRangeScan {
    Map<ByteString, List<Message>> messages = new HashMap<ByteString, List<Message>>();
    boolean failure = false;
    ServiceDownException exception = new ServiceDownException("Asked to fail");

    public void deliveredUntil(ByteString topic, Long seqId) {
        // noop
    }

    public void consumedUntil(ByteString topic, Long seqId) {
        // noop
    }

    public void setMessageBound(ByteString topic, Integer bound) {
        // noop
    }

    public void clearMessageBound(ByteString topic) {
        // noop
    }

    public void consumeToBound(ByteString topic) {
        // noop
    }

    protected static class ArrayListMessageFactory implements Factory<List<Message>> {
        static ArrayListMessageFactory instance = new ArrayListMessageFactory();

        public List<Message> newInstance() {
            return new ArrayList<Message>();
        }
    }

    public MessageSeqId getCurrentSeqIdForTopic(ByteString topic) {
        long seqId = MapMethods.getAfterInsertingIfAbsent(messages, topic,
                ArrayListMessageFactory.instance).size();
        return MessageSeqId.newBuilder().setLocalComponent(seqId).build();
    }

    public long getSeqIdAfterSkipping(ByteString topic, long seqId, int skipAmount) {
        return seqId + skipAmount;
    }

    public void persistMessage(PersistRequest request) {
        if (failure) {
            request.getCallback().operationFailed(request.getCtx(), exception);
            return;
        }
        MapMethods.addToMultiMap(messages, request.getTopic(), request.getMessage(),
                ArrayListMessageFactory.instance);
        request.getCallback().operationFinished(request.getCtx(),
                MessageIdUtils.mergeLocalSeqId(request.getMessage(),
                        (long) messages.get(request.getTopic()).size()).getMsgId());
    }

    public void scanSingleMessage(ScanRequest request) {
        if (failure) {
            request.getCallback().scanFailed(request.getCtx(), exception);
            return;
        }
        long index = request.getStartSeqId() - 1;
        List<Message> messageList = messages.get(request.getTopic());
        if (index >= messageList.size()) {
            request.getCallback().scanFinished(request.getCtx(), ReasonForFinish.NO_MORE_MESSAGES);
            return;
        }
        Message msg = messageList.get((int) index);
        Message toDeliver = MessageIdUtils.mergeLocalSeqId(msg, request.getStartSeqId());
        request.getCallback().messageScanned(request.getCtx(), toDeliver);
    }

    public void scanMessages(RangeScanRequest request) {
        if (failure) {
            request.getCallback().scanFailed(request.getCtx(), exception);
            return;
        }
        long totalSize = 0;
        long startSeqId = request.getStartSeqId();
        for (int i = 0; i < request.getMessageLimit(); i++) {
            List<Message> messageList = MapMethods.getAfterInsertingIfAbsent(messages,
                    request.getTopic(), ArrayListMessageFactory.instance);
            if (startSeqId + i > messageList.size()) {
                request.getCallback().scanFinished(request.getCtx(), ReasonForFinish.NO_MORE_MESSAGES);
                return;
            }
            Message msg =
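// (The loop body continues below: each scanned message is merged with its
// computed local sequence id before delivery, and the scan ends early with
// SIZE_LIMIT_EXCEEDED once the cumulative body size exceeds getSizeLimit().)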
messageList.get((int) startSeqId + i - 1); Message toDeliver = MessageIdUtils.mergeLocalSeqId(msg, startSeqId + i); request.getCallback().messageScanned(request.getCtx(), toDeliver); totalSize += toDeliver.getBody().size(); if (totalSize > request.getSizeLimit()) { request.getCallback().scanFinished(request.getCtx(), ReasonForFinish.SIZE_LIMIT_EXCEEDED); return; } } request.getCallback().scanFinished(request.getCtx(), ReasonForFinish.NUM_MESSAGES_LIMIT_EXCEEDED); } @Override public void stop() { // do nothing } } StubScanCallback.java000066400000000000000000000034621244507361200355740ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/persistence/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.persistence; import java.util.concurrent.LinkedBlockingQueue; import com.google.protobuf.ByteString; import org.apache.hedwig.protocol.PubSubProtocol.Message; import org.apache.hedwig.util.ConcurrencyUtils; import org.apache.hedwig.util.Either; public class StubScanCallback implements ScanCallback { public static Message END_MESSAGE = Message.newBuilder().setBody(ByteString.EMPTY).build(); LinkedBlockingQueue> queue = new LinkedBlockingQueue>(); @Override public void messageScanned(Object ctx, Message message) { ConcurrencyUtils.put(queue, Either.of(message, (Exception) null)); } @Override public void scanFailed(Object ctx, Exception exception) { ConcurrencyUtils.put(queue, Either.of((Message) null, exception)); } @Override public void scanFinished(Object ctx, ReasonForFinish reason) { ConcurrencyUtils.put(queue, Either.of(END_MESSAGE, (Exception) null)); } } TestBookKeeperPersistenceManager.java000066400000000000000000001004541244507361200410220ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/persistence/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package org.apache.hedwig.server.persistence; import java.io.IOException; import java.util.ArrayList; import java.util.Arrays; import java.util.Collection; import java.util.LinkedList; import java.util.List; import java.util.Iterator; import java.util.Map; import java.util.concurrent.Executors; import java.util.concurrent.LinkedBlockingQueue; import java.util.concurrent.ScheduledExecutorService; import java.util.concurrent.Semaphore; import java.util.concurrent.TimeUnit; import junit.framework.TestCase; import org.apache.bookkeeper.versioning.Version; import org.apache.bookkeeper.versioning.Versioned; import org.apache.hedwig.HelperMethods; import org.apache.hedwig.StubCallback; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.protocol.PubSubProtocol; import org.apache.hedwig.protocol.PubSubProtocol.LedgerRange; import org.apache.hedwig.protocol.PubSubProtocol.LedgerRanges; import org.apache.hedwig.protocol.PubSubProtocol.Message; import org.apache.hedwig.protocol.PubSubProtocol.MessageSeqId; import org.apache.hedwig.protocol.PubSubProtocol.SubscribeRequest; import org.apache.hedwig.protocol.PubSubProtocol.SubscribeRequest.CreateOrAttach; import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionData; import org.apache.hedwig.server.common.ServerConfiguration; import org.apache.hedwig.server.meta.MetadataManagerFactory; import org.apache.hedwig.server.meta.SubscriptionDataManager; import org.apache.hedwig.server.meta.TopicOwnershipManager; import org.apache.hedwig.server.meta.TopicPersistenceManager; import org.apache.hedwig.server.subscriptions.MMSubscriptionManager; import org.apache.hedwig.server.topics.TopicManager; import org.apache.hedwig.server.topics.TrivialOwnAllTopicManager; import org.apache.hedwig.util.Callback; import org.apache.hedwig.util.ConcurrencyUtils; import org.apache.hedwig.util.Either; import org.apache.zookeeper.ZooKeeper; import org.junit.After; import org.junit.Before; import org.junit.Test; import org.junit.runner.RunWith; import org.junit.runners.Parameterized; import org.junit.runners.Parameterized.Parameters; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import com.google.protobuf.ByteString; @RunWith(Parameterized.class) public class TestBookKeeperPersistenceManager extends TestCase { static Logger logger = LoggerFactory.getLogger(TestPersistenceManagerBlackBox.class); BookKeeperTestBase bktb; private final int numBookies = 3; private final long readDelay = 2000L; private final int maxEntriesPerLedger = 10; ServerConfiguration conf; ScheduledExecutorService scheduler; TopicManager tm; BookkeeperPersistenceManager manager; PubSubException failureException = null; TestMetadataManagerFactory metadataManagerFactory; TopicPersistenceManager tpManager; MMSubscriptionManager sm; boolean removeStartSeqId; static class TestMetadataManagerFactory extends MetadataManagerFactory { final MetadataManagerFactory factory; int serviceDownCount = 0; TestMetadataManagerFactory(ServerConfiguration conf, ZooKeeper zk) throws Exception { factory = MetadataManagerFactory.newMetadataManagerFactory(conf, zk); } public void setServiceDownCount(int count) { this.serviceDownCount = count; } @Override public int getCurrentVersion() { return factory.getCurrentVersion(); } @Override protected MetadataManagerFactory initialize( ServerConfiguration cfg, ZooKeeper zk, int version) throws IOException { // do nothing return factory; } @Override public void shutdown() throws IOException { factory.shutdown(); } @Override public Iterator getTopics() 
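// TestMetadataManagerFactory decorates the real MetadataManagerFactory so the
// tests can inject metadata-store outages: setServiceDownCount(n) makes the
// next n intercepted operations fail with ServiceDownException, after which
// calls delegate normally again. Typical use from the tests below:
//
//   metadataManagerFactory.setServiceDownCount(1);
//   // the next intercepted subscription-state or ledger-range update fails once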
throws IOException {
        return factory.getTopics();
    }

    @Override
    public TopicPersistenceManager newTopicPersistenceManager() {
        final TopicPersistenceManager manager = factory.newTopicPersistenceManager();
        return new TopicPersistenceManager() {
            @Override
            public void close() throws IOException {
                manager.close();
            }

            @Override
            public void readTopicPersistenceInfo(ByteString topic,
                    Callback<Versioned<LedgerRanges>> callback, Object ctx) {
                if (serviceDownCount > 0) {
                    --serviceDownCount;
                    callback.operationFailed(ctx, new PubSubException.ServiceDownException("Metadata Store is down"));
                    return;
                }
                manager.readTopicPersistenceInfo(topic, callback, ctx);
            }

            @Override
            public void writeTopicPersistenceInfo(ByteString topic, LedgerRanges ranges, Version version,
                    Callback<Version> callback, Object ctx) {
                if (serviceDownCount > 0) {
                    --serviceDownCount;
                    callback.operationFailed(ctx, new PubSubException.ServiceDownException("Metadata Store is down"));
                    return;
                }
                manager.writeTopicPersistenceInfo(topic, ranges, version, callback, ctx);
            }

            @Override
            public void deleteTopicPersistenceInfo(ByteString topic, Version version,
                    Callback<Void> callback, Object ctx) {
                if (serviceDownCount > 0) {
                    --serviceDownCount;
                    callback.operationFailed(ctx, new PubSubException.ServiceDownException("Metadata Store is down"));
                    return;
                }
                manager.deleteTopicPersistenceInfo(topic, version, callback, ctx);
            }
        };
    }

    @Override
    public SubscriptionDataManager newSubscriptionDataManager() {
        final SubscriptionDataManager sdm = factory.newSubscriptionDataManager();
        return new SubscriptionDataManager() {
            @Override
            public void close() throws IOException {
                sdm.close();
            }

            @Override
            public void createSubscriptionData(ByteString topic, ByteString subscriberId, SubscriptionData data,
                    Callback<Version> callback, Object ctx) {
                sdm.createSubscriptionData(topic, subscriberId, data, callback, ctx);
            }

            @Override
            public boolean isPartialUpdateSupported() {
                return sdm.isPartialUpdateSupported();
            }

            @Override
            public void updateSubscriptionData(ByteString topic, ByteString subscriberId, SubscriptionData dataToUpdate,
                    Version version, Callback<Version> callback, Object ctx) {
                if (serviceDownCount > 0) {
                    --serviceDownCount;
                    callback.operationFailed(ctx, new PubSubException.ServiceDownException("Metadata Store is down"));
                    return;
                }
                sdm.updateSubscriptionData(topic, subscriberId, dataToUpdate, version, callback, ctx);
            }

            @Override
            public void replaceSubscriptionData(ByteString topic, ByteString subscriberId, SubscriptionData dataToReplace,
                    Version version, Callback<Version> callback, Object ctx) {
                if (serviceDownCount > 0) {
                    --serviceDownCount;
                    callback.operationFailed(ctx, new PubSubException.ServiceDownException("Metadata Store is down"));
                    return;
                }
                sdm.replaceSubscriptionData(topic, subscriberId, dataToReplace, version, callback, ctx);
            }

            @Override
            public void deleteSubscriptionData(ByteString topic, ByteString subscriberId, Version version,
                    Callback<Void> callback, Object ctx) {
                sdm.deleteSubscriptionData(topic, subscriberId, version, callback, ctx);
            }

            @Override
            public void readSubscriptionData(ByteString topic, ByteString subscriberId,
                    Callback<Versioned<SubscriptionData>> callback, Object ctx) {
                sdm.readSubscriptionData(topic, subscriberId, callback, ctx);
            }

            @Override
            public void readSubscriptions(ByteString topic,
                    Callback<Map<ByteString, Versioned<SubscriptionData>>> cb, Object ctx) {
                sdm.readSubscriptions(topic, cb, ctx);
            }
        };
    }

    @Override
    public TopicOwnershipManager newTopicOwnershipManager() {
        return factory.newTopicOwnershipManager();
    }

    @Override
    public void format(ServerConfiguration cfg, ZooKeeper zk) throws IOException {
        factory.format(cfg, zk);
    }
}

public TestBookKeeperPersistenceManager(boolean
removeStartSeqId) { this.removeStartSeqId = removeStartSeqId; } @Parameters public static Collection configs() { return Arrays.asList(new Object[][] { { true }, { false } }); } @SuppressWarnings("deprecation") private void startCluster(long delay) throws Exception { bktb = new BookKeeperTestBase(numBookies, 0L); bktb.setUp(); conf = new ServerConfiguration() { @Override public int getMessagesConsumedThreadRunInterval() { return 2000; } @Override public int getConsumeInterval() { return 0; } @Override public long getMaxEntriesPerLedger() { return maxEntriesPerLedger; } }; org.apache.bookkeeper.conf.ClientConfiguration bkClientConf = new org.apache.bookkeeper.conf.ClientConfiguration(); bkClientConf.setNumWorkerThreads(1).setReadTimeout(9999) .setThrottleValue(3); conf.addConf(bkClientConf); metadataManagerFactory = new TestMetadataManagerFactory(conf, bktb.getZooKeeperClient()); tpManager = metadataManagerFactory.newTopicPersistenceManager(); scheduler = Executors.newScheduledThreadPool(1); tm = new TrivialOwnAllTopicManager(conf, scheduler); manager = new BookkeeperPersistenceManager(bktb.bk, metadataManagerFactory, tm, conf, scheduler); sm = new MMSubscriptionManager(conf, metadataManagerFactory, tm, manager, null, scheduler); } private void stopCluster() throws Exception { tm.stop(); manager.stop(); sm.stop(); tpManager.close(); metadataManagerFactory.shutdown(); scheduler.shutdown(); bktb.tearDown(); } @Override @Before public void setUp() throws Exception { super.setUp(); startCluster(0L); } @Override @After public void tearDown() throws Exception { stopCluster(); super.tearDown(); } class RangeScanVerifier implements ScanCallback { LinkedList pubMsgs; boolean runNextScan = false; RangeScanRequest nextScan = null; public RangeScanVerifier(LinkedList pubMsgs, RangeScanRequest nextScan) { this.pubMsgs = pubMsgs; this.nextScan = nextScan; } @Override public void messageScanned(Object ctx, Message recvMessage) { logger.info("Scanned message : {}", recvMessage.getMsgId().getLocalComponent()); if (null != nextScan && !runNextScan) { runNextScan = true; manager.scanMessages(nextScan); } if (pubMsgs.size() == 0) { return; } Message pubMsg = pubMsgs.removeFirst(); if (!HelperMethods.areEqual(recvMessage, pubMsg)) { fail("Scanned message not equal to expected"); } } @Override public void scanFailed(Object ctx, Exception exception) { fail("Failed to scan messages."); } @Override @SuppressWarnings("unchecked") public void scanFinished(Object ctx, ReasonForFinish reason) { LinkedBlockingQueue statusQueue = (LinkedBlockingQueue) ctx; try { statusQueue.put(pubMsgs.isEmpty()); } catch (InterruptedException e) { throw new RuntimeException(e); } } } private LinkedList subMessages(List msgs, int start, int end) { LinkedList result = new LinkedList(); for (int i=start; i<=end; i++) { result.add(msgs.get(i)); } return result; } @Test(timeout=60000) public void testScanMessagesOnClosedLedgerAfterDeleteLedger() throws Exception { scanMessagesAfterDeleteLedgerTest(2); } @Test(timeout=60000) public void testScanMessagesOnUnclosedLedgerAfterDeleteLedger() throws Exception { scanMessagesAfterDeleteLedgerTest(1); } private void scanMessagesAfterDeleteLedgerTest(int numLedgers) throws Exception { ByteString topic = ByteString.copyFromUtf8("TestScanMessagesAfterDeleteLedger"); List msgs = new ArrayList(); acquireTopic(topic); msgs.addAll(publishMessages(topic, 2)); for (int i=0; i statusQueue = new LinkedBlockingQueue(); manager.scanMessages(new RangeScanRequest(topic, 3, 2, Long.MAX_VALUE, new 
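// RangeScanRequest's arguments are (topic, startSeqId, messageLimit,
// sizeLimit, callback, ctx): this request scans 2 messages starting at seq id
// 3, with Long.MAX_VALUE effectively disabling the size limit, and the
// RangeScanVerifier compares each scanned message against the published ones.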
RangeScanVerifier(subMessages(msgs, 2, 3), null), statusQueue)); Boolean b = statusQueue.poll(10 * readDelay, TimeUnit.MILLISECONDS); assertTrue("Should succeed to scan messages after deleted consumed ledger.", b); } @Test(timeout=60000) public void testScanMessagesOnEmptyLedgerAfterDeleteLedger() throws Exception { ByteString topic = ByteString.copyFromUtf8("TestScanMessagesOnEmptyLedgerAfterDeleteLedger"); List msgs = new ArrayList(); acquireTopic(topic); msgs.addAll(publishMessages(topic, 2)); releaseTopic(topic); // acquire topic again to force a new ledger acquireTopic(topic); logger.info("Consumed messages."); consumedUntil(topic, 2L); // Wait until ledger ranges is updated. Thread.sleep(2000L); logger.info("Released topic with an empty ledger."); // release topic to force an empty ledger releaseTopic(topic); // publish 2 more messages, these message expected to be id 3 and 4 acquireTopic(topic); logger.info("Published more messages."); msgs.addAll(publishMessages(topic, 2)); releaseTopic(topic); // acquire topic again acquireTopic(topic); // scan messages starting from 3 LinkedBlockingQueue statusQueue = new LinkedBlockingQueue(); long startSeqId = removeStartSeqId ? 1 : 3; manager.scanMessages(new RangeScanRequest(topic, startSeqId, 2, Long.MAX_VALUE, new RangeScanVerifier(subMessages(msgs, 2, 3), null), statusQueue)); Boolean b = statusQueue.poll(10 * readDelay, TimeUnit.MILLISECONDS); assertTrue("Should succeed to scan messages after deleted consumed ledger.", b); } @Test(timeout=60000) public void testFailedToDeleteLedger1() throws Exception { failedToDeleteLedgersTest(1); } @Test(timeout=60000) public void testFailedToDeleteLedger2() throws Exception { // succeed to delete second ledger failedToDeleteLedgersTest(2); } private void failedToDeleteLedgersTest(int numLedgers) throws Exception { final ByteString topic = ByteString.copyFromUtf8("TestFailedToDeleteLedger"); final int serviceDownCount = 1; List msgs = new ArrayList(); for (int i=0; i statusQueue = new LinkedBlockingQueue(); manager.scanMessages(new RangeScanRequest(topic, numLedgers * 2 + 1, 2, Long.MAX_VALUE, new RangeScanVerifier(subMessages(msgs, numLedgers * 2, numLedgers * 2 + 1), null), statusQueue)); Boolean b = statusQueue.poll(10 * readDelay, TimeUnit.MILLISECONDS); assertTrue("Should succeed to scan messages after deleted consumed ledger.", b); // consumed consumedUntil(topic, (numLedgers + 1) * 2L); // Wait until ledger ranges is updated. 
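// consumedUntil() only records the consume position; the actual deletion of
// fully-consumed ledgers and the rewrite of the LedgerRanges metadata happen
// asynchronously in the messages-consumed background task (its run interval
// is overridden to 2000ms above), hence the sleep before re-reading and
// verifying the topic's persistence info.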
Thread.sleep(2000L); Semaphore latch = new Semaphore(1); latch.acquire(); tpManager.readTopicPersistenceInfo(topic, new Callback>() { @Override public void operationFinished(Object ctx, Versioned ranges) { if (null == ranges || ranges.getValue().getRangesList().size() > 1) { failureException = new PubSubException.NoTopicPersistenceInfoException("Invalid persistence info found for topic " + topic.toStringUtf8()); ((Semaphore)ctx).release(); return; } failureException = null; ((Semaphore)ctx).release(); } @Override public void operationFailed(Object ctx, PubSubException exception) { failureException = exception; ((Semaphore)ctx).release(); } }, latch); latch.acquire(); latch.release(); assertNull("Should not fail with exception.", failureException); } @Test(timeout=60000) public void testScanMessagesOnTwoLedgers() throws Exception { stopCluster(); startCluster(readDelay); ByteString topic = ByteString.copyFromUtf8("TestScanMessagesOnTwoLedgers"); List msgs = new ArrayList(); acquireTopic(topic); msgs.addAll(publishMessages(topic, 1)); releaseTopic(topic); // acquire topic again to force a new ledger acquireTopic(topic); msgs.addAll(publishMessages(topic, 3)); // scan messages LinkedBlockingQueue statusQueue = new LinkedBlockingQueue(); RangeScanRequest nextScan = new RangeScanRequest(topic, 3, 2, Long.MAX_VALUE, new RangeScanVerifier(subMessages(msgs, 2, 3), null), statusQueue); manager.scanMessages(new RangeScanRequest(topic, 1, 2, Long.MAX_VALUE, new RangeScanVerifier(subMessages(msgs, 0, 1), nextScan), statusQueue)); Boolean b = statusQueue.poll(10 * readDelay, TimeUnit.MILLISECONDS); if (b == null) { fail("One scan request doesn't finish"); } b = statusQueue.poll(10 * readDelay, TimeUnit.MILLISECONDS); if (b == null) { fail("One scan request doesn't finish"); } } @Test(timeout=60000) public void testInconsistentSubscriptionStateAndLedgerRanges1() throws Exception { // See the comment of inconsistentSubscriptionStateAndLedgerRanges. // For this case, Step (2) failed to update subscription state metadata, // but LedgerRanges is updated success. // Result: scan messages from 1 to 4 take place on ledger L2. inconsistentSubscriptionStateAndLedgerRanges(1); } @Test(timeout=60000) public void testInconsistentSubscriptionStateAndLedgerRanges2() throws Exception { // See the comment of inconsistentSubscriptionStateAndLedgerRanges. // For this case, step (2) failed to update subscription state metadata, // step (3) successfully delete L1 but failed to update LedgerRanges. // Result: scan messages from 1 to 4 falls in L1 and L2, // but BookKeeper may complain L1 not found. inconsistentSubscriptionStateAndLedgerRanges(2); } /** * Since InMemorySubscriptionState and LedgerRanges is maintained * separately, there may exist such inconsistent state: * (1). Topic ledgers: L1 [1 ~ 2], L2 [3 ~ ] * (2). Subscriber consumes to 2 and InMemorySubscriptionState is updated * successfully but failed when updating subscription state metadata * (3). AbstractSubscriptionManager#MessagesConsumedTask use * InMemorySubscriptionState to do garbage collection * and L1 is delete * (4). 
If Hub restarts at this time, old subscription state is read and * Hub will try to deliver message from 1 */ public void inconsistentSubscriptionStateAndLedgerRanges(int failedCount) throws Exception { final ByteString topic = ByteString.copyFromUtf8("inconsistentSubscriptionStateAndLedgerRanges"); final ByteString subscriberId = ByteString.copyFromUtf8("subId"); LinkedList msgs = new LinkedList(); // make ledger L1 [1 ~ 2] acquireTopic(topic); msgs.addAll(publishMessages(topic, 2)); releaseTopic(topic); // acquire topic again to force a new ledger L2 [3 ~ ] acquireTopic(topic); msgs.addAll(publishMessages(topic, 2)); StubCallback voidCb = new StubCallback(); StubCallback subDataCb = new StubCallback(); Either voidResult; Either subDataResult; // prepare for subscription sm.acquiredTopic(topic, voidCb, null); voidResult = ConcurrencyUtils.take(voidCb.queue); assertNull(voidResult.right()); // no exception // Do subscription SubscribeRequest subRequest = SubscribeRequest.newBuilder().setSubscriberId(subscriberId) .setCreateOrAttach(CreateOrAttach.CREATE_OR_ATTACH).build(); sm.serveSubscribeRequest(topic, subRequest, MessageSeqId.newBuilder().setLocalComponent(0).build(), subDataCb, null); subDataResult = ConcurrencyUtils.take(subDataCb.queue); assertNotNull(subDataResult.left()); // serveSubscribeRequest success // and return a SubscriptionData // object assertNull(subDataResult.right()); // no exception // simulate inconsistent situation between InMemorySubscriptionState and // LedgerRanges metadataManagerFactory.setServiceDownCount(failedCount); sm.setConsumeSeqIdForSubscriber(topic, subscriberId, MessageSeqId.newBuilder().setLocalComponent(2).build(), voidCb, null); voidResult = ConcurrencyUtils.take(voidCb.queue); assertNotNull(voidResult.right()); // update subscription state failed // and expect a exception // wait AbstractSubscriptionManager#MessagesConsumedTask to garbage // collect ledger L1 Thread.sleep(conf.getMessagesConsumedThreadRunInterval() * 2); // simulate hub restart: read old subscription state metadata and deliver // messages from 1 LinkedBlockingQueue statusQueue = new LinkedBlockingQueue(); RangeScanRequest scan = new RangeScanRequest(topic, 1, 4, Long.MAX_VALUE, new RangeScanVerifier(msgs, null), statusQueue); manager.scanMessages(scan); Boolean b = statusQueue.poll(10 * readDelay, TimeUnit.MILLISECONDS); if (b == null) { fail("Scan request doesn't finish"); } } @Test(timeout=60000) // Add this test case for BOOKKEEPER-458 public void testReadWhenTopicChangeLedger() throws Exception { final ByteString topic = ByteString.copyFromUtf8("testReadWhenTopicChangeLedger"); LinkedList msgs = new LinkedList(); // Write maxEntriesPerLedger entries to make topic change ledger acquireTopic(topic); msgs.addAll(publishMessages(topic, maxEntriesPerLedger)); // Notice, change ledger operation is asynchronous, so we should wait!!! 
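// Regression test for BOOKKEEPER-458: after maxEntriesPerLedger entries the
// topic rolls over to a new ledger asynchronously, and a scan that starts
// exactly at the first seq id of the new, still-empty ledger must complete
// and report that nothing was scanned rather than wedge, hence the sleep
// below before issuing the scan.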
Thread.sleep(2000); // Issue a scan request right start from the new ledger LinkedBlockingQueue statusQueue = new LinkedBlockingQueue(); RangeScanRequest scan = new RangeScanRequest(topic, maxEntriesPerLedger + 1, 1, Long.MAX_VALUE, new RangeScanVerifier(msgs, null), statusQueue); manager.scanMessages(scan); Boolean b = statusQueue.poll(10 * readDelay, TimeUnit.MILLISECONDS); if (b == null) { fail("Scan request timeout"); } assertFalse("Expect none message is scanned on the new created ledger", b); } class TestCallback implements Callback { @Override @SuppressWarnings("unchecked") public void operationFailed(Object ctx, PubSubException exception) { LinkedBlockingQueue statusQueue = (LinkedBlockingQueue) ctx; try { statusQueue.put(false); } catch (InterruptedException e) { throw new RuntimeException(e); } } @Override @SuppressWarnings("unchecked") public void operationFinished(Object ctx, PubSubProtocol.MessageSeqId resultOfOperation) { LinkedBlockingQueue statusQueue = (LinkedBlockingQueue) ctx; try { statusQueue.put(true); } catch (InterruptedException e) { throw new RuntimeException(e); } } } protected List publishMessages(ByteString topic, int numMsgs) throws Exception { List msgs = HelperMethods.getRandomPublishedMessages(numMsgs, 1024); LinkedBlockingQueue statusQueue = new LinkedBlockingQueue(); for (Message msg : msgs) { try { manager.persistMessage(new PersistRequest(topic, msg, new TestCallback(), statusQueue)); // wait a maximum of a minute Boolean b = statusQueue.poll(60, TimeUnit.SECONDS); if (b == null) { throw new RuntimeException("Publish timed out"); } } catch (InterruptedException e) { throw new RuntimeException(e); } } return msgs; } protected void acquireTopic(ByteString topic) throws Exception { Semaphore latch = new Semaphore(1); latch.acquire(); manager.acquiredTopic(topic, new Callback() { @Override public void operationFinished(Object ctx, Void resultOfOperation) { failureException = null; ((Semaphore)ctx).release(); } @Override public void operationFailed(Object ctx, PubSubException exception) { failureException = exception; ((Semaphore)ctx).release(); } }, latch); latch.acquire(); latch.release(); if (null != failureException) { throw failureException; } } protected void releaseTopic(final ByteString topic) throws Exception { manager.lostTopic(topic); // backward testing ledger ranges without start seq id if (removeStartSeqId) { Semaphore latch = new Semaphore(1); latch.acquire(); tpManager.readTopicPersistenceInfo(topic, new Callback>() { @Override public void operationFinished(Object ctx, Versioned ranges) { if (null == ranges) { failureException = new PubSubException.NoTopicPersistenceInfoException("No persistence info found for topic " + topic.toStringUtf8()); ((Semaphore)ctx).release(); return; } // build a new ledger ranges w/o start seq id. 
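// Backward-compatibility path (removeStartSeqId == true): older metadata
// wrote LedgerRange entries without the startSeqIdIncluded field, so the test
// rebuilds the ranges keeping only ledgerId and endSeqIdIncluded and writes
// them back at the same version, forcing subsequent scans to run against
// old-format ledger ranges.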
LedgerRanges.Builder builder = LedgerRanges.newBuilder(); final List rangesList = ranges.getValue().getRangesList(); for (LedgerRange range : rangesList) { LedgerRange.Builder newRangeBuilder = LedgerRange.newBuilder(); newRangeBuilder.setLedgerId(range.getLedgerId()); if (range.hasEndSeqIdIncluded()) { newRangeBuilder.setEndSeqIdIncluded(range.getEndSeqIdIncluded()); } builder.addRanges(newRangeBuilder.build()); } tpManager.writeTopicPersistenceInfo(topic, builder.build(), ranges.getVersion(), new Callback() { @Override public void operationFinished(Object ctx, Version newVersion) { failureException = null; ((Semaphore)ctx).release(); } @Override public void operationFailed(Object ctx, PubSubException exception) { failureException = exception; ((Semaphore)ctx).release(); } }, ctx); } @Override public void operationFailed(Object ctx, PubSubException exception) { failureException = exception; ((Semaphore)ctx).release(); } }, latch); latch.acquire(); latch.release(); if (null != failureException) { throw failureException; } } } protected void consumedUntil(ByteString topic, long seqId) throws Exception { manager.consumedUntil(topic, seqId); } } TestBookKeeperPersistenceManagerBlackBox.java000066400000000000000000000055541244507361200424350ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/persistence/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.persistence; import java.util.concurrent.Executors; import java.util.concurrent.ScheduledExecutorService; import junit.framework.Test; import junit.framework.TestSuite; import org.junit.After; import org.junit.Before; import org.apache.hedwig.server.common.ServerConfiguration; import org.apache.hedwig.server.meta.MetadataManagerFactory; import org.apache.hedwig.server.topics.TrivialOwnAllTopicManager; public class TestBookKeeperPersistenceManagerBlackBox extends TestPersistenceManagerBlackBox { BookKeeperTestBase bktb; private final int numBookies = 3; MetadataManagerFactory metadataManagerFactory = null; @Override @Before protected void setUp() throws Exception { // We need to setUp this class first since the super.setUp() method will // need the BookKeeperTestBase to be instantiated. 
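// instantiatePersistenceManager() below needs bktb.bk and the ZooKeeper
// client, so the BookKeeper/ZooKeeper test cluster has to be up before
// super.setUp() constructs the persistence manager.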
bktb = new BookKeeperTestBase(numBookies); bktb.setUp(); super.setUp(); } @Override @After protected void tearDown() throws Exception { bktb.tearDown(); super.tearDown(); if (null != metadataManagerFactory) { metadataManagerFactory.shutdown(); } } @Override long getLowestSeqId() { return 1; } @Override PersistenceManager instantiatePersistenceManager() throws Exception { ServerConfiguration conf = new ServerConfiguration(); ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1); metadataManagerFactory = MetadataManagerFactory.newMetadataManagerFactory(conf, bktb.getZooKeeperClient()); return new BookkeeperPersistenceManager(bktb.bk, metadataManagerFactory, new TrivialOwnAllTopicManager(conf, scheduler), conf, scheduler); } @Override public long getExpectedSeqId(int numPublished) { return numPublished; } public static Test suite() { return new TestSuite(TestBookKeeperPersistenceManagerBlackBox.class); } } TestBookkeeperPersistenceManagerWhiteBox.java000066400000000000000000000360221244507361200425330ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/persistence/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package org.apache.hedwig.server.persistence; import java.util.List; import java.util.Random; import java.util.concurrent.Executors; import java.util.concurrent.ScheduledExecutorService; import junit.framework.TestCase; import org.apache.bookkeeper.client.BookKeeper; import org.apache.hedwig.protocol.PubSubProtocol; import org.apache.hedwig.protocol.PubSubProtocol.MessageSeqId; import org.junit.After; import org.junit.Before; import org.junit.Test; import org.apache.hedwig.util.Either; import com.google.protobuf.ByteString; import org.apache.hedwig.HelperMethods; import org.apache.hedwig.StubCallback; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.protocol.PubSubProtocol.Message; import org.apache.hedwig.server.common.ServerConfiguration; import org.apache.hedwig.server.meta.MetadataManagerFactory; import org.apache.hedwig.server.topics.TopicManager; import org.apache.hedwig.server.topics.TrivialOwnAllTopicManager; import org.apache.hedwig.util.ConcurrencyUtils; import org.slf4j.Logger; import org.slf4j.LoggerFactory; public class TestBookkeeperPersistenceManagerWhiteBox extends TestCase { protected static Logger logger = LoggerFactory.getLogger(TestBookkeeperPersistenceManagerWhiteBox.class); BookKeeperTestBase bktb; private final int numBookies = 3; BookkeeperPersistenceManager bkpm; MetadataManagerFactory mm; ServerConfiguration conf; ScheduledExecutorService scheduler; TopicManager tm; ByteString topic = ByteString.copyFromUtf8("topic0"); @Override @Before protected void setUp() throws Exception { super.setUp(); bktb = new BookKeeperTestBase(numBookies); bktb.setUp(); conf = new ServerConfiguration(); scheduler = Executors.newScheduledThreadPool(1); tm = new TrivialOwnAllTopicManager(conf, scheduler); mm = MetadataManagerFactory.newMetadataManagerFactory(conf, bktb.getZooKeeperClient()); bkpm = new BookkeeperPersistenceManager(bktb.bk, mm, tm, conf, scheduler); } @Override @After protected void tearDown() throws Exception { mm.shutdown(); bktb.tearDown(); super.tearDown(); } @Test(timeout=60000) public void testEmptyDirtyLedger() throws Exception { StubCallback stubCallback = new StubCallback(); bkpm.acquiredTopic(topic, stubCallback, null); assertNull(ConcurrencyUtils.take(stubCallback.queue).right()); // now abandon, and try another time, the prev ledger should be dirty bkpm = new BookkeeperPersistenceManager(new BookKeeper(bktb.getZkHostPort()), mm, tm, conf, scheduler); bkpm.acquiredTopic(topic, stubCallback, null); assertNull(ConcurrencyUtils.take(stubCallback.queue).right()); assertEquals(0, bkpm.topicInfos.get(topic).ledgerRanges.size()); } @Test(timeout=60000) public void testNonEmptyDirtyLedger() throws Exception { Random r = new Random(); int NUM_MESSAGES_TO_TEST = 100; int SIZE_OF_MESSAGES_TO_TEST = 100; int index = 0; int numPrevLedgers = 0; List messages = HelperMethods.getRandomPublishedMessages(NUM_MESSAGES_TO_TEST, SIZE_OF_MESSAGES_TO_TEST); while (index < messages.size()) { StubCallback stubCallback = new StubCallback(); bkpm.acquiredTopic(topic, stubCallback, null); assertNull(ConcurrencyUtils.take(stubCallback.queue).right()); assertEquals(numPrevLedgers, bkpm.topicInfos.get(topic).ledgerRanges.size()); StubCallback persistCallback = new StubCallback(); bkpm.persistMessage(new PersistRequest(topic, messages.get(index), persistCallback, null)); assertEquals(index + 1, ConcurrencyUtils.take(persistCallback.queue).left().getLocalComponent()); index++; // once in every 10 times, give up ledger if (r.nextInt(10) == 9) { // should not 
release topic when the message is last message // otherwise when we call scan, bookkeeper persistence manager doesn't own the topic if (index < messages.size()) { // Make the bkpm lose its memory bkpm.topicInfos.clear(); numPrevLedgers++; } } } // Lets scan now StubScanCallback scanCallback = new StubScanCallback(); bkpm.scanMessages(new RangeScanRequest(topic, 1, NUM_MESSAGES_TO_TEST, Long.MAX_VALUE, scanCallback, null)); for (int i = 0; i < messages.size(); i++) { Message scannedMessage = ConcurrencyUtils.take(scanCallback.queue).left(); assertTrue(messages.get(i).getBody().equals(scannedMessage.getBody())); assertEquals(i + 1, scannedMessage.getMsgId().getLocalComponent()); } assertTrue(StubScanCallback.END_MESSAGE == ConcurrencyUtils.take(scanCallback.queue).left()); } static final long maxEntriesPerLedger = 10; class ChangeLedgerServerConfiguration extends ServerConfiguration { @Override public long getMaxEntriesPerLedger() { return maxEntriesPerLedger; } } @Test(timeout=60000) public void testSyncChangeLedgers() throws Exception { int NUM_MESSAGES_TO_TEST = 101; int SIZE_OF_MESSAGES_TO_TEST = 100; int index = 0; List messages = HelperMethods.getRandomPublishedMessages(NUM_MESSAGES_TO_TEST, SIZE_OF_MESSAGES_TO_TEST); bkpm = new BookkeeperPersistenceManager(bktb.bk, mm, tm, new ChangeLedgerServerConfiguration(), scheduler); // acquire the topic StubCallback stubCallback = new StubCallback(); bkpm.acquiredTopic(topic, stubCallback, null); assertNull(ConcurrencyUtils.take(stubCallback.queue).right()); assertEquals(0, bkpm.topicInfos.get(topic).ledgerRanges.size()); while (index < messages.size()) { logger.debug("Persist message {}", (index + 1)); StubCallback persistCallback = new StubCallback(); bkpm.persistMessage(new PersistRequest(topic, messages.get(index), persistCallback, null)); assertEquals(index + 1, ConcurrencyUtils.take(persistCallback.queue).left().getLocalComponent()); index++; if (index % maxEntriesPerLedger == 1) { assertEquals(index / maxEntriesPerLedger, bkpm.topicInfos.get(topic).ledgerRanges.size()); } } assertEquals(NUM_MESSAGES_TO_TEST / maxEntriesPerLedger, bkpm.topicInfos.get(topic).ledgerRanges.size()); // Lets scan now StubScanCallback scanCallback = new StubScanCallback(); bkpm.scanMessages(new RangeScanRequest(topic, 1, NUM_MESSAGES_TO_TEST, Long.MAX_VALUE, scanCallback, null)); for (int i = 0; i < messages.size(); i++) { Message scannedMessage = ConcurrencyUtils.take(scanCallback.queue).left(); assertTrue(messages.get(i).getBody().equals(scannedMessage.getBody())); assertEquals(i + 1, scannedMessage.getMsgId().getLocalComponent()); } assertTrue(StubScanCallback.END_MESSAGE == ConcurrencyUtils.take(scanCallback.queue).left()); // Make the bkpm lose its memory bkpm.topicInfos.clear(); // acquire the topic again stubCallback = new StubCallback(); bkpm.acquiredTopic(topic, stubCallback, null); assertNull(ConcurrencyUtils.take(stubCallback.queue).right()); assertEquals(NUM_MESSAGES_TO_TEST / maxEntriesPerLedger + 1, bkpm.topicInfos.get(topic).ledgerRanges.size()); } class OrderCheckingCallback extends StubCallback { long curMsgId; int numMessages; int numProcessed; int numSuccess; int numFailed; OrderCheckingCallback(long startMsgId, int numMessages) { this.curMsgId = startMsgId; this.numMessages = numMessages; numProcessed = numSuccess = numFailed = 0; } @Override public void operationFailed(Object ctx, final PubSubException exception) { synchronized (this) { ++numFailed; ++numProcessed; if (numProcessed == numMessages) { MessageSeqId.Builder seqIdBuilder = 
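// OrderCheckingCallback aggregates the per-message persist callbacks: it
// advances curMsgId only while acknowledgements arrive in sequence, counts
// successes and failures, and once all numMessages operations have completed
// it reports the final contiguous seq id through the stub callback queue.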
MessageSeqId.newBuilder().setLocalComponent(curMsgId); super.operationFinished(ctx, seqIdBuilder.build()); } } } @Override public void operationFinished(Object ctx, final MessageSeqId seqId) { synchronized(this) { long msgId = seqId.getLocalComponent(); if (msgId == curMsgId) { ++curMsgId; } ++numSuccess; ++numProcessed; if (numProcessed == numMessages) { MessageSeqId.Builder seqIdBuilder = MessageSeqId.newBuilder().setLocalComponent(curMsgId); super.operationFinished(ctx, seqIdBuilder.build()); } } } } @Test(timeout=60000) public void testAsyncChangeLedgers() throws Exception { int NUM_MESSAGES_TO_TEST = 101; int SIZE_OF_MESSAGES_TO_TEST = 100; List messages = HelperMethods.getRandomPublishedMessages(NUM_MESSAGES_TO_TEST, SIZE_OF_MESSAGES_TO_TEST); bkpm = new BookkeeperPersistenceManager(bktb.bk, mm, tm, new ChangeLedgerServerConfiguration(), scheduler); // acquire the topic StubCallback stubCallback = new StubCallback(); bkpm.acquiredTopic(topic, stubCallback, null); assertNull(ConcurrencyUtils.take(stubCallback.queue).right()); assertEquals(0, bkpm.topicInfos.get(topic).ledgerRanges.size()); OrderCheckingCallback persistCallback = new OrderCheckingCallback(1, NUM_MESSAGES_TO_TEST); for (Message message : messages) { bkpm.persistMessage(new PersistRequest(topic, message, persistCallback, null)); } assertEquals(NUM_MESSAGES_TO_TEST + 1, ConcurrencyUtils.take(persistCallback.queue).left().getLocalComponent()); assertEquals(NUM_MESSAGES_TO_TEST, persistCallback.numSuccess); assertEquals(0, persistCallback.numFailed); assertEquals(NUM_MESSAGES_TO_TEST / maxEntriesPerLedger, bkpm.topicInfos.get(topic).ledgerRanges.size()); // ensure the bkpm has the topic before scanning stubCallback = new StubCallback(); bkpm.acquiredTopic(topic, stubCallback, null); // Lets scan now StubScanCallback scanCallback = new StubScanCallback(); bkpm.scanMessages(new RangeScanRequest(topic, 1, NUM_MESSAGES_TO_TEST, Long.MAX_VALUE, scanCallback, null)); for (int i = 0; i < messages.size(); i++) { Either e = ConcurrencyUtils.take(scanCallback.queue); Message scannedMessage = e.left(); if (scannedMessage == null) { throw e.right(); } assertTrue(messages.get(i).getBody().equals(scannedMessage.getBody())); assertEquals(i + 1, scannedMessage.getMsgId().getLocalComponent()); } assertTrue(StubScanCallback.END_MESSAGE == ConcurrencyUtils.take(scanCallback.queue).left()); // Make the bkpm lose its memory bkpm.topicInfos.clear(); // acquire the topic again stubCallback = new StubCallback(); bkpm.acquiredTopic(topic, stubCallback, null); assertNull(ConcurrencyUtils.take(stubCallback.queue).right()); assertEquals(NUM_MESSAGES_TO_TEST / maxEntriesPerLedger + 1, bkpm.topicInfos.get(topic).ledgerRanges.size()); } class ChangeLedgerCallback extends OrderCheckingCallback { boolean tearDown = false; ChangeLedgerCallback(long startMsgId, int numMessages) { super(startMsgId, numMessages); } @Override public void operationFinished(Object ctx, final MessageSeqId msgId) { super.operationFinished(ctx, msgId); // shutdown bookie server when changing ledger // so following requests should fail if (msgId.getLocalComponent() >= maxEntriesPerLedger && !tearDown) { try { bktb.tearDownOneBookieServer(); bktb.tearDownOneBookieServer(); } catch (Exception e) { logger.error("Failed to tear down bookie server."); } tearDown = true; } } } @Test(timeout=60000) public void testChangeLedgerFailure() throws Exception { int NUM_MESSAGES_TO_TEST = 101; int SIZE_OF_MESSAGES_TO_TEST = 100; List messages = 
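// ChangeLedgerCallback tears down two of the three bookies as soon as the
// first ledger fills, so an ensemble for the next ledger cannot be formed;
// testChangeLedgerFailure then expects exactly maxEntriesPerLedger successful
// persists and NUM_MESSAGES_TO_TEST - maxEntriesPerLedger failures.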
HelperMethods.getRandomPublishedMessages(NUM_MESSAGES_TO_TEST, SIZE_OF_MESSAGES_TO_TEST); bkpm = new BookkeeperPersistenceManager(bktb.bk, mm, tm, new ChangeLedgerServerConfiguration(), scheduler); // acquire the topic StubCallback stubCallback = new StubCallback(); bkpm.acquiredTopic(topic, stubCallback, null); assertNull(ConcurrencyUtils.take(stubCallback.queue).right()); assertEquals(0, bkpm.topicInfos.get(topic).ledgerRanges.size()); ChangeLedgerCallback persistCallback = new ChangeLedgerCallback(1, NUM_MESSAGES_TO_TEST); for (Message message : messages) { bkpm.persistMessage(new PersistRequest(topic, message, persistCallback, null)); } assertEquals(maxEntriesPerLedger + 1, ConcurrencyUtils.take(persistCallback.queue).left().getLocalComponent()); assertEquals(maxEntriesPerLedger, persistCallback.numSuccess); assertEquals(NUM_MESSAGES_TO_TEST - maxEntriesPerLedger, persistCallback.numFailed); } } TestDeadlock.java000066400000000000000000000235041244507361200350020ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/persistence/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package org.apache.hedwig.server.persistence; import java.io.IOException; import java.util.concurrent.CountDownLatch; import java.util.concurrent.SynchronousQueue; import org.junit.After; import org.junit.Before; import org.junit.Test; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import com.google.protobuf.ByteString; import org.apache.hedwig.client.api.MessageHandler; import org.apache.hedwig.client.api.Publisher; import org.apache.hedwig.client.api.Subscriber; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.client.conf.ClientConfiguration; import org.apache.hedwig.client.HedwigClient; import org.apache.hedwig.protocol.PubSubProtocol.Message; import org.apache.hedwig.protocol.PubSubProtocol.SubscribeRequest.CreateOrAttach; import org.apache.hedwig.server.HedwigHubTestBase; import org.apache.hedwig.server.common.ServerConfiguration; import org.apache.hedwig.util.Callback; import org.apache.hedwig.util.ConcurrencyUtils; public class TestDeadlock extends HedwigHubTestBase { protected static Logger logger = LoggerFactory.getLogger(TestDeadlock.class); // Client side variables protected HedwigClient client; protected Publisher publisher; protected Subscriber subscriber; ByteString topic = ByteString.copyFromUtf8("DeadLockTopic"); ByteString subscriberId = ByteString.copyFromUtf8("dl"); public TestDeadlock() { super(1); } @Override @Before public void setUp() throws Exception { numBookies = 1; readDelay = 1000L; // 1s super.setUp(); client = new HedwigClient(new HubClientConfiguration()); publisher = client.getPublisher(); subscriber = client.getSubscriber(); } @Override @After public void tearDown() throws Exception { client.close(); super.tearDown(); } // Test implementation of Callback for async client actions. static class TestCallback implements Callback { private final SynchronousQueue queue; public TestCallback(SynchronousQueue queue) { this.queue = queue; } @Override public void operationFinished(Object ctx, Void resultOfOperation) { new Thread(new Runnable() { @Override public void run() { if (logger.isDebugEnabled()) logger.debug("Operation finished!"); ConcurrencyUtils.put(queue, true); } }).start(); } @Override public void operationFailed(Object ctx, final PubSubException exception) { new Thread(new Runnable() { @Override public void run() { logger.error("Operation failed!", exception); ConcurrencyUtils.put(queue, false); } }).start(); } } // Test implementation of subscriber's message handler. class TestMessageHandler implements MessageHandler { private final SynchronousQueue consumeQueue; boolean doAdd = false; public TestMessageHandler(SynchronousQueue consumeQueue) { this.consumeQueue = consumeQueue; } public void deliver(ByteString t, ByteString sub, final Message msg, Callback callback, Object context) { if (!doAdd) { // after receiving first message, we send a publish // to obtain permit of second ledger doAdd = true; new Thread(new Runnable() { @Override public void run() { // publish messages again to obtain permits logger.info("Start publishing message to obtain permit"); // it obtains the permit and wait for a response, // but the response is delayed and readEntries is called // in the readComplete callback to read entries of the // same ledger. 
since there is no permit, it blocks try { CountDownLatch latch = new CountDownLatch(1); sleepBookies(8, latch); latch.await(); SynchronousQueue queue = new SynchronousQueue(); for (int i=0; i<3; i++) { publisher.asyncPublish(topic, getMsg(9999), new TestCallback(queue), null); } for (int i=0; i<3; i++) { if (!queue.take()) { logger.error("Error publishing to topic {}", topic); ConcurrencyUtils.put(consumeQueue, false); } } } catch (Exception e) { logger.error("Failed to publish message to obtain permit."); } } }).start(); } new Thread(new Runnable() { @Override public void run() { ConcurrencyUtils.put(consumeQueue, true); } }).start(); callback.operationFinished(context, null); } } // Helper function to generate Messages protected Message getMsg(int msgNum) { return Message.newBuilder().setBody(ByteString.copyFromUtf8("Message" + msgNum)).build(); } // Helper function to generate Topics protected ByteString getTopic(int topicNum) { return ByteString.copyFromUtf8("DeadLockTopic" + topicNum); } class TestServerConfiguration extends HubServerConfiguration { public TestServerConfiguration(int serverPort, int sslServerPort) { super(serverPort, sslServerPort); } @Override public int getBkEnsembleSize() { return 1; } @Override public int getBkQuorumSize() { return 1; } @Override public int getReadAheadCount() { return 4; } @Override public long getMaximumCacheSize() { return 32; } } @SuppressWarnings("deprecation") @Override protected ServerConfiguration getServerConfiguration(int serverPort, int sslServerPort) { ServerConfiguration serverConf = new TestServerConfiguration(serverPort, sslServerPort); org.apache.bookkeeper.conf.ClientConfiguration bkClientConf = new org.apache.bookkeeper.conf.ClientConfiguration(); bkClientConf.setNumWorkerThreads(1).setReadTimeout(9999) .setThrottleValue(3); try { serverConf.addConf(bkClientConf); } catch (Exception e) { } return serverConf; } @Test(timeout=60000) public void testDeadlock() throws Exception { int numMessages = 5; SynchronousQueue consumeQueue = new SynchronousQueue(); // subscribe to topic logger.info("Setup subscriptions"); subscriber.subscribe(topic, subscriberId, CreateOrAttach.CREATE_OR_ATTACH); subscriber.closeSubscription(topic, subscriberId); // publish 5 messages to form first ledger for (int i=0; i { public void operationFailed(Object ctx, PubSubException exception) { throw (failureException = new RuntimeException(exception)); } @SuppressWarnings("unchecked") public void operationFinished(Object ctx, PubSubProtocol.MessageSeqId resultOfOperation) { LinkedBlockingQueue statusQueue = (LinkedBlockingQueue) ctx; try { statusQueue.put(true); } catch (InterruptedException e) { throw (failureException = new RuntimeException(e)); } } } class RangeScanVerifierListener implements ScanCallback { List pubMsgs; public RangeScanVerifierListener(List pubMsgs) { this.pubMsgs = pubMsgs; } public void messageScanned(Object ctx, Message recvMessage) { if (pubMsgs.isEmpty()) { throw (failureException = new RuntimeException("Message received when none expected")); } Message pubMsg = pubMsgs.get(0); if (!HelperMethods.areEqual(recvMessage, pubMsg)) { throw (failureException = new RuntimeException("Scanned message not equal to expected")); } pubMsgs.remove(0); } public void scanFailed(Object ctx, Exception exception) { throw (failureException = new RuntimeException(exception)); } @SuppressWarnings("unchecked") public void scanFinished(Object ctx, ReasonForFinish reason) { if (reason != ReasonForFinish.NO_MORE_MESSAGES) { throw (failureException = new 
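// RangeScanVerifierListener drains a whole range in one request and expects
// the scan to end with NO_MORE_MESSAGES; PointScanVerifierListener (below)
// instead walks one message at a time, re-issuing scanSingleMessage from
// messageScanned with the seq id advanced via getSeqIdAfterSkipping.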
RuntimeException("Scan finished prematurely " + reason)); } LinkedBlockingQueue statusQueue = (LinkedBlockingQueue) ctx; try { statusQueue.put(true); } catch (InterruptedException e) { throw (failureException = new RuntimeException(e)); } } } class PointScanVerifierListener implements ScanCallback { List pubMsgs; ByteString topic; public PointScanVerifierListener(List pubMsgs, ByteString topic) { this.topic = topic; this.pubMsgs = pubMsgs; } @SuppressWarnings("unchecked") public void messageScanned(Object ctx, Message recvMessage) { Message pubMsg = pubMsgs.get(0); if (!HelperMethods.areEqual(recvMessage, pubMsg)) { throw (failureException = new RuntimeException("Scanned message not equal to expected")); } pubMsgs.remove(0); if (pubMsgs.isEmpty()) { LinkedBlockingQueue statusQueue = (LinkedBlockingQueue) ctx; try { statusQueue.put(true); } catch (InterruptedException e) { throw (failureException = new RuntimeException(e)); } } else { long seqId = recvMessage.getMsgId().getLocalComponent(); seqId = persistenceManager.getSeqIdAfterSkipping(topic, seqId, 1); ScanRequest request = new ScanRequest(topic, seqId, new PointScanVerifierListener(pubMsgs, topic), ctx); persistenceManager.scanSingleMessage(request); } } public void scanFailed(Object ctx, Exception exception) { throw (failureException = new RuntimeException(exception)); } public void scanFinished(Object ctx, ReasonForFinish reason) { } } class ScanVerifier implements Runnable { List pubMsgs; ByteString topic; LinkedBlockingQueue statusQueue = new LinkedBlockingQueue(); public ScanVerifier(ByteString topic, List pubMsgs) { this.topic = topic; this.pubMsgs = pubMsgs; } public void run() { // start the scan try { if (persistenceManager instanceof PersistenceManagerWithRangeScan) { ScanCallback listener = new RangeScanVerifierListener(pubMsgs); PersistenceManagerWithRangeScan rangePersistenceManager = (PersistenceManagerWithRangeScan) persistenceManager; rangePersistenceManager.scanMessages(new RangeScanRequest(topic, getLowestSeqId(), NUM_MESSAGES_TO_TEST + 1, Long.MAX_VALUE, listener, statusQueue)); } else { ScanCallback listener = new PointScanVerifierListener(pubMsgs, topic); persistenceManager .scanSingleMessage(new ScanRequest(topic, getLowestSeqId(), listener, statusQueue)); } // now listen for it to finish // wait a maximum of a minute Boolean b = statusQueue.poll(60, TimeUnit.SECONDS); if (b == null) { throw (failureException = new RuntimeException("Scanning timed out")); } } catch (InterruptedException e) { throw (failureException = new RuntimeException(e)); } } } class Publisher implements Runnable { List pubMsgs; ByteString topic; public Publisher(ByteString topic, List pubMsgs) { this.pubMsgs = pubMsgs; this.topic = topic; } public void run() { LinkedBlockingQueue statusQueue = new LinkedBlockingQueue(); for (Message msg : pubMsgs) { try { persistenceManager.persistMessage(new PersistRequest(topic, msg, testCallback, statusQueue)); // wait a maximum of a minute Boolean b = statusQueue.poll(60, TimeUnit.SECONDS); if (b == null) { throw (failureException = new RuntimeException("Scanning timed out")); } } catch (InterruptedException e) { throw (failureException = new RuntimeException(e)); } } } } @Override protected void setUp() throws Exception { logger.info("STARTING " + getName()); persistenceManager = instantiatePersistenceManager(); failureException = null; logger.info("Persistence Manager test setup finished"); } abstract long getLowestSeqId(); abstract PersistenceManager instantiatePersistenceManager() throws Exception; 
@Override protected void tearDown() throws Exception { logger.info("tearDown starting"); persistenceManager.stop(); super.tearDown(); logger.info("FINISHED " + getName()); } protected ByteString getTopicName(int number) { return ByteString.copyFromUtf8("topic" + number); } @Test(timeout=60000) public void testPersistenceManager() throws Exception { List publisherThreads = new LinkedList(); List scannerThreads = new LinkedList(); Thread thread; Semaphore latch = new Semaphore(1); for (int i = 0; i < NUM_TOPICS_TO_TEST; i++) { ByteString topic = getTopicName(i); if (persistenceManager instanceof TopicOwnershipChangeListener) { TopicOwnershipChangeListener tocl = (TopicOwnershipChangeListener) persistenceManager; latch.acquire(); tocl.acquiredTopic(topic, new Callback() { @Override public void operationFailed(Object ctx, PubSubException exception) { failureException = new RuntimeException(exception); ((Semaphore) ctx).release(); } @Override public void operationFinished(Object ctx, Void res) { ((Semaphore) ctx).release(); } }, latch); latch.acquire(); latch.release(); if (failureException != null) { throw (Exception) failureException.getCause(); } } List msgs = HelperMethods.getRandomPublishedMessages(NUM_MESSAGES_TO_TEST, 1024); thread = new Thread(new Publisher(topic, msgs)); publisherThreads.add(thread); thread.start(); thread = new Thread(new ScanVerifier(topic, msgs)); scannerThreads.add(thread); } for (Thread t : publisherThreads) { t.join(); } for (Thread t : scannerThreads) { t.start(); } for (Thread t : scannerThreads) { t.join(); } assertEquals(null, failureException); for (int i = 0; i < NUM_TOPICS_TO_TEST; i++) { assertEquals(persistenceManager.getCurrentSeqIdForTopic(getTopicName(i)).getLocalComponent(), getExpectedSeqId(NUM_MESSAGES_TO_TEST)); } } abstract long getExpectedSeqId(int numPublished); } TestReadAheadCacheBlackBox.java000066400000000000000000000035041244507361200374220ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/persistence/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
        for (Thread t : scannerThreads) {
            t.start();
        }
        for (Thread t : scannerThreads) {
            t.join();
        }

        assertEquals(null, failureException);

        for (int i = 0; i < NUM_TOPICS_TO_TEST; i++) {
            assertEquals(persistenceManager.getCurrentSeqIdForTopic(getTopicName(i)).getLocalComponent(),
                         getExpectedSeqId(NUM_MESSAGES_TO_TEST));
        }
    }

    abstract long getExpectedSeqId(int numPublished);
}

bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/persistence/TestReadAheadCacheBlackBox.java

package org.apache.hedwig.server.persistence;

import junit.framework.Test;
import junit.framework.TestSuite;

import org.apache.hedwig.server.common.ServerConfiguration;
import org.apache.hedwig.server.persistence.LocalDBPersistenceManager;
import org.apache.hedwig.server.persistence.PersistenceManager;
import org.apache.hedwig.server.persistence.ReadAheadCache;

public class TestReadAheadCacheBlackBox extends TestPersistenceManagerBlackBox {
    @Override
    protected void tearDown() throws Exception {
        super.tearDown();
        LocalDBPersistenceManager.instance().reset();
    }

    @Override
    long getExpectedSeqId(int numPublished) {
        return numPublished;
    }

    @Override
    long getLowestSeqId() {
        return 1;
    }

    @Override
    PersistenceManager instantiatePersistenceManager() {
        return new ReadAheadCache(LocalDBPersistenceManager.instance(), new ServerConfiguration()).start();
    }

    public static Test suite() {
        return new TestSuite(TestReadAheadCacheBlackBox.class);
    }
}
bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/persistence/TestReadAheadCacheWhiteBox.java

package org.apache.hedwig.server.persistence;

import static org.junit.Assert.*;

import java.util.List;

import org.apache.hedwig.protocol.PubSubProtocol;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

import com.google.protobuf.ByteString;

import org.apache.bookkeeper.util.MathUtils;
import org.apache.hedwig.HelperMethods;
import org.apache.hedwig.StubCallback;
import org.apache.hedwig.StubScanCallback;
import org.apache.hedwig.protocol.PubSubProtocol.Message;
import org.apache.hedwig.server.common.ServerConfiguration;
import org.apache.hedwig.util.ConcurrencyUtils;

public class TestReadAheadCacheWhiteBox {
    ByteString topic = ByteString.copyFromUtf8("testTopic");
    final static int NUM_MESSAGES = 10;
    final static int MSG_SIZE = 50;
    List<Message> messages = HelperMethods.getRandomPublishedMessages(NUM_MESSAGES, MSG_SIZE);
    StubPersistenceManager stubPersistenceManager;
    ReadAheadCache cacheBasedPersistenceManager;
    MyServerConfiguration myConf = new MyServerConfiguration();

    class MyReadAheadCache extends ReadAheadCache {
        public MyReadAheadCache(PersistenceManagerWithRangeScan persistenceManager, ServerConfiguration cfg) {
            super(persistenceManager, cfg);
        }

        @Override
        protected void enqueueWithoutFailureByTopic(ByteString topic, final CacheRequest obj) {
            // make it perform in the same thread
            obj.performRequest();
        }
    }

    class MyServerConfiguration extends ServerConfiguration {
        // Note these are set up so that the size limit will be reached before
        // the count limit
        int readAheadCount = NUM_MESSAGES / 2;
        long readAheadSize = (long) (MSG_SIZE * 2.5);
        long maxCacheSize = Integer.MAX_VALUE;
        long cacheEntryTTL = 0L;

        @Override
        public int getReadAheadCount() {
            return readAheadCount;
        }

        @Override
        public long getReadAheadSizeBytes() {
            return readAheadSize;
        }

        @Override
        public long getMaximumCacheSize() {
            return maxCacheSize;
        }

        @Override
        public long getCacheEntryTTL() {
            return cacheEntryTTL;
        }
    }
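    // Concretely: MSG_SIZE = 50 and readAheadSize = (long) (50 * 2.5) = 125,
    // so once real message bodies are cached a read-ahead run stops after
    // ceil(125 / 50.0) = 3 messages, below readAheadCount = NUM_MESSAGES / 2
    // = 5; testReadAheadSizeLimit below asserts exactly this count.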
    @Before
    public void setUp() throws Exception {
        stubPersistenceManager = new StubPersistenceManager();
        cacheBasedPersistenceManager = new MyReadAheadCache(stubPersistenceManager, myConf).start();
    }

    @After
    public void tearDown() throws Exception {
    }

    @Test(timeout=60000)
    public void testPersistMessage() throws Exception {
        StubCallback callback = new StubCallback();
        PersistRequest request = new PersistRequest(topic, messages.get(0), callback, null);
        stubPersistenceManager.failure = true;
        cacheBasedPersistenceManager.persistMessage(request);
        assertNotNull(ConcurrencyUtils.take(callback.queue).right());
        CacheKey key = new CacheKey(topic,
                cacheBasedPersistenceManager.getCurrentSeqIdForTopic(topic).getLocalComponent());
        assertFalse(cacheBasedPersistenceManager.cache.containsKey(key));

        stubPersistenceManager.failure = false;
        persistMessage(messages.get(0));
    }

    private void persistMessage(Message msg) throws Exception {
        StubCallback callback = new StubCallback();
        PersistRequest request = new PersistRequest(topic, msg, callback, null);
        cacheBasedPersistenceManager.persistMessage(request);
        assertNotNull(ConcurrencyUtils.take(callback.queue).left());

        CacheKey key = new CacheKey(topic,
                cacheBasedPersistenceManager.getCurrentSeqIdForTopic(topic).getLocalComponent());
        CacheValue cacheValue = cacheBasedPersistenceManager.cache.get(key);
        assertNotNull(cacheValue);
        assertFalse(cacheValue.isStub());
        assertTrue(HelperMethods.areEqual(cacheValue.getMessage(), msg));
    }

    @Test(timeout=60000)
    public void testScanSingleMessage() throws Exception {
        StubScanCallback callback = new StubScanCallback();
        ScanRequest request = new ScanRequest(topic, 1, callback, null);
        stubPersistenceManager.failure = true;
        cacheBasedPersistenceManager.scanSingleMessage(request);
        assertTrue(callback.isFailed());
        assertTrue(0 == cacheBasedPersistenceManager.cache.size());

        stubPersistenceManager.failure = false;
        cacheBasedPersistenceManager.scanSingleMessage(request);
        assertTrue(myConf.readAheadCount == cacheBasedPersistenceManager.cache.size());

        persistMessage(messages.get(0));
        assertTrue(callback.isSuccess());
    }

    @Test(timeout=60000)
    public void testDeliveredUntil() throws Exception {
        for (Message m : messages) {
            persistMessage(m);
        }
        assertEquals((long) NUM_MESSAGES * MSG_SIZE, cacheBasedPersistenceManager.presentCacheSize.get());

        long middle = messages.size() / 2;
        cacheBasedPersistenceManager.deliveredUntil(topic, middle);
        assertEquals(messages.size() - middle, cacheBasedPersistenceManager.cache.size());

        long middle2 = middle - 1;
        cacheBasedPersistenceManager.deliveredUntil(topic, middle2);
        // should have no effect
        assertEquals(messages.size() - middle, cacheBasedPersistenceManager.cache.size());

        // delivered all messages
        cacheBasedPersistenceManager.deliveredUntil(topic, (long) messages.size());
        assertTrue(cacheBasedPersistenceManager.cache.isEmpty());
        assertTrue(cacheBasedPersistenceManager.cacheSegment.get().timeIndexOfAddition.isEmpty());
        assertTrue(cacheBasedPersistenceManager.orderedIndexOnSeqId.isEmpty());
        assertTrue(0 == cacheBasedPersistenceManager.presentCacheSize.get());
    }

    @Test(timeout=60000)
    public void testDoReadAhead() {
        StubScanCallback callback = new StubScanCallback();
        ScanRequest request = new ScanRequest(topic, 1, callback, null);
        cacheBasedPersistenceManager.doReadAhead(request);
        assertEquals(myConf.readAheadCount, cacheBasedPersistenceManager.cache.size());

        request = new ScanRequest(topic, myConf.readAheadCount / 2 - 1, callback, null);
        cacheBasedPersistenceManager.doReadAhead(request);
        assertEquals(myConf.readAheadCount, cacheBasedPersistenceManager.cache.size());

        request = new ScanRequest(topic, myConf.readAheadCount / 2 + 2, callback, null);
        cacheBasedPersistenceManager.doReadAhead(request);
        assertEquals((int) (1.5 * myConf.readAheadCount), cacheBasedPersistenceManager.cache.size());
    }

    @Test(timeout=60000)
    public void testReadAheadSizeLimit() throws Exception {
        for (Message m : messages) {
            persistMessage(m);
        }
        cacheBasedPersistenceManager.cache.clear();

        StubScanCallback callback = new StubScanCallback();
        ScanRequest request = new ScanRequest(topic, 1, callback, null);
        cacheBasedPersistenceManager.scanSingleMessage(request);
        assertTrue(callback.isSuccess());
        assertEquals((int) Math.ceil(myConf.readAheadSize / (MSG_SIZE + 0.0)),
                     cacheBasedPersistenceManager.cache.size());
    }
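    // doReadAheadStartingFrom() returns null when the requested range is
    // already covered by cached entries; otherwise it caps the returned
    // RangeScanRequest's messageLimit at the distance to the first
    // already-cached seq-id, or at the requested count when nothing cached is
    // within reach. The next test walks through all three cases.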
    @Test(timeout=60000)
    public void testDoReadAheadStartingFrom() throws Exception {
        persistMessage(messages.get(0));

        int readAheadCount = 5;
        int start = 1;
        RangeScanRequest readAheadRequest =
            cacheBasedPersistenceManager.doReadAheadStartingFrom(topic, start, readAheadCount);
        assertNull(readAheadRequest);

        StubScanCallback callback = new StubScanCallback();
        int end = 100;
        ScanRequest request = new ScanRequest(topic, end, callback, null);
        cacheBasedPersistenceManager.doReadAhead(request);

        int pos = 98;
        readAheadRequest = cacheBasedPersistenceManager.doReadAheadStartingFrom(topic, pos, readAheadCount);
        assertEquals(readAheadRequest.messageLimit, end - pos);

        end = 200;
        request = new ScanRequest(topic, end, callback, null);
        cacheBasedPersistenceManager.doReadAhead(request);

        // too far back
        pos = 150;
        readAheadRequest = cacheBasedPersistenceManager.doReadAheadStartingFrom(topic, pos, readAheadCount);
        assertEquals(readAheadRequest.messageLimit, readAheadCount);
    }

    @Test(timeout=60000)
    public void testAddMessageToCache() {
        CacheKey key = new CacheKey(topic, 1);
        cacheBasedPersistenceManager.addMessageToCache(key, messages.get(0), MathUtils.now());
        assertEquals(1, cacheBasedPersistenceManager.cache.size());
        assertEquals(MSG_SIZE, cacheBasedPersistenceManager.presentCacheSize.get());
        assertEquals(1, cacheBasedPersistenceManager.orderedIndexOnSeqId.get(topic).size());
        assertTrue(cacheBasedPersistenceManager.orderedIndexOnSeqId.get(topic).contains(1L));
        CacheValue value = cacheBasedPersistenceManager.cache.get(key);
        assertTrue(cacheBasedPersistenceManager.cacheSegment.get()
                   .timeIndexOfAddition.get(value.timeOfAddition).contains(key));
    }

    @Test(timeout=60000)
    public void testRemoveMessageFromCache() {
        CacheKey key = new CacheKey(topic, 1);
        cacheBasedPersistenceManager.addMessageToCache(key, messages.get(0), MathUtils.now());
        cacheBasedPersistenceManager.removeMessageFromCache(key, new Exception(), true, true);
        assertTrue(cacheBasedPersistenceManager.cache.isEmpty());
        assertTrue(cacheBasedPersistenceManager.orderedIndexOnSeqId.isEmpty());
        assertTrue(cacheBasedPersistenceManager.cacheSegment.get().timeIndexOfAddition.isEmpty());
    }

    @Test(timeout=60000)
    public void testCollectOldCacheEntries() {
        int i = 1;
        for (Message m : messages) {
            CacheKey key = new CacheKey(topic, i);
            cacheBasedPersistenceManager.addMessageToCache(key, m, i);
            i++;
        }
        int n = 2;
        myConf.maxCacheSize = n * MSG_SIZE * myConf.getNumReadAheadCacheThreads();
        cacheBasedPersistenceManager.reloadConf(myConf);
        cacheBasedPersistenceManager.collectOldOrExpiredCacheEntries(
                cacheBasedPersistenceManager.cacheSegment.get());
        assertEquals(n, cacheBasedPersistenceManager.cache.size());
        assertEquals(n, cacheBasedPersistenceManager.cacheSegment.get().timeIndexOfAddition.size());
    }

    @Test(timeout=60000)
    public void testCollectExpiredCacheEntries() throws Exception {
        int i = 1;
        int n = 2;
        long ttl = 5000L;
        myConf.cacheEntryTTL = ttl;
        long curTime = MathUtils.now();
        cacheBasedPersistenceManager.reloadConf(myConf);
        for (Message m : messages) {
            CacheKey key = new CacheKey(topic, i);
            cacheBasedPersistenceManager.addMessageToCache(key, m, curTime++);
            if (i == NUM_MESSAGES - n) {
                Thread.sleep(2 * ttl);
                curTime += 2 * ttl;
            }
            i++;
        }
        assertEquals(n, cacheBasedPersistenceManager.cache.size());
        assertEquals(n, cacheBasedPersistenceManager.cacheSegment.get().timeIndexOfAddition.size());
    }
}

bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/subscriptions/StubSubscriptionManager.java
package org.apache.hedwig.server.subscriptions;

import java.util.concurrent.ScheduledExecutorService;

import com.google.protobuf.ByteString;

import org.apache.hedwig.exceptions.PubSubException;
import org.apache.hedwig.protocol.PubSubProtocol.MessageSeqId;
import org.apache.hedwig.protocol.PubSubProtocol.SubscribeRequest;
import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionData;
import org.apache.hedwig.server.common.ServerConfiguration;
import org.apache.hedwig.server.delivery.DeliveryManager;
import org.apache.hedwig.server.persistence.PersistenceManager;
import org.apache.hedwig.server.topics.TopicManager;
import org.apache.hedwig.util.Callback;

public class StubSubscriptionManager extends InMemorySubscriptionManager {
    boolean fail = false;

    public void setFail(boolean fail) {
        this.fail = fail;
    }

    public StubSubscriptionManager(TopicManager tm, PersistenceManager pm, DeliveryManager dm,
                                   ServerConfiguration conf, ScheduledExecutorService scheduler) {
        super(conf, tm, pm, dm, scheduler);
    }

    @Override
    public void serveSubscribeRequest(ByteString topic, SubscribeRequest subRequest, MessageSeqId consumeSeqId,
                                      Callback<SubscriptionData> callback, Object ctx) {
        if (fail) {
            callback.operationFailed(ctx, new PubSubException.ServiceDownException("Asked to fail"));
            return;
        }
        super.serveSubscribeRequest(topic, subRequest, consumeSeqId, callback, ctx);
    }
}
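
// Usage sketch (illustrative only, not part of the original tests; the
// variable names tm, pm, dm, conf, scheduler and the callback are assumed to
// exist in the caller): with setFail(true) every serveSubscribeRequest()
// short-circuits into operationFailed with a ServiceDownException, which lets
// tests simulate a subscription-manager outage.
//
//     StubSubscriptionManager sm =
//         new StubSubscriptionManager(tm, pm, dm, conf, scheduler);
//     sm.setFail(true);
//     sm.serveSubscribeRequest(topic, subRequest, consumeSeqId, callback, ctx);
//     // callback.operationFailed(ctx, ServiceDownException("Asked to fail"))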
bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/subscriptions/TestMMSubscriptionManager.java

package org.apache.hedwig.server.subscriptions;

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.SynchronousQueue;

import org.junit.Assert;
import org.junit.Test;
import org.junit.Before;

import com.google.protobuf.ByteString;

import org.apache.hedwig.exceptions.PubSubException;
import org.apache.hedwig.protocol.PubSubProtocol.MessageSeqId;
import org.apache.hedwig.protocol.PubSubProtocol.SubscribeRequest;
import org.apache.hedwig.protocol.PubSubProtocol.SubscribeRequest.CreateOrAttach;
import org.apache.hedwig.protocol.PubSubProtocol.SubscriptionData;
import org.apache.hedwig.server.common.ServerConfiguration;
import org.apache.hedwig.server.persistence.LocalDBPersistenceManager;
import org.apache.hedwig.server.topics.TrivialOwnAllTopicManager;
import org.apache.hedwig.server.meta.MetadataManagerFactory;
import org.apache.hedwig.util.ConcurrencyUtils;
import org.apache.hedwig.util.Either;
import org.apache.hedwig.util.Callback;
import org.apache.hedwig.zookeeper.ZooKeeperTestBase;

public class TestMMSubscriptionManager extends ZooKeeperTestBase {

    MetadataManagerFactory mm;
    MMSubscriptionManager sm;
    ServerConfiguration cfg = new ServerConfiguration();

    SynchronousQueue<Either<SubscriptionData, PubSubException>> subDataCallbackQueue =
        new SynchronousQueue<Either<SubscriptionData, PubSubException>>();
    SynchronousQueue<Either<Boolean, PubSubException>> BooleanCallbackQueue =
        new SynchronousQueue<Either<Boolean, PubSubException>>();

    Callback<Void> voidCallback;
    Callback<SubscriptionData> subDataCallback;

    @Before
    @Override
    public void setUp() throws Exception {
        super.setUp();
        cfg = new ServerConfiguration();
        final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        mm = MetadataManagerFactory.newMetadataManagerFactory(cfg, zk);
        sm = new MMSubscriptionManager(cfg, mm, new TrivialOwnAllTopicManager(cfg, scheduler),
                                       LocalDBPersistenceManager.instance(), null, scheduler);

        subDataCallback = new Callback<SubscriptionData>() {
            @Override
            public void operationFailed(Object ctx, final PubSubException exception) {
                scheduler.execute(new Runnable() {
                    public void run() {
                        ConcurrencyUtils.put(subDataCallbackQueue,
                                Either.of((SubscriptionData) null, exception));
                    }
                });
            }

            @Override
            public void operationFinished(Object ctx, final SubscriptionData resultOfOperation) {
                scheduler.execute(new Runnable() {
                    public void run() {
                        ConcurrencyUtils.put(subDataCallbackQueue,
                                Either.of(resultOfOperation, (PubSubException) null));
                    }
                });
            }
        };

        voidCallback = new Callback<Void>() {
            @Override
            public void operationFailed(Object ctx, final PubSubException exception) {
                scheduler.execute(new Runnable() {
                    public void run() {
                        ConcurrencyUtils.put(BooleanCallbackQueue, Either.of((Boolean) null, exception));
                    }
                });
            }

            @Override
            public void operationFinished(Object ctx, Void resultOfOperation) {
                scheduler.execute(new Runnable() {
                    public void run() {
                        ConcurrencyUtils.put(BooleanCallbackQueue, Either.of(true, (PubSubException) null));
                    }
                });
            }
        };
    }

    @Test(timeout=60000)
    public void testBasics() throws Exception {
        ByteString topic1 = ByteString.copyFromUtf8("topic1");
        ByteString sub1 = ByteString.copyFromUtf8("sub1");

        //
        // No topics acquired.
        //
        SubscribeRequest subRequest = SubscribeRequest.newBuilder().setSubscriberId(sub1).build();
        MessageSeqId msgId = MessageSeqId.newBuilder().setLocalComponent(100).build();

        sm.serveSubscribeRequest(topic1, subRequest, msgId, subDataCallback, null);
        Assert.assertEquals(ConcurrencyUtils.take(subDataCallbackQueue).right().getClass(),
                PubSubException.ServerNotResponsibleForTopicException.class);

        sm.unsubscribe(topic1, sub1, voidCallback, null);
        Assert.assertEquals(ConcurrencyUtils.take(BooleanCallbackQueue).right().getClass(),
                PubSubException.ServerNotResponsibleForTopicException.class);

        //
        // Acquire topic.
        //
        sm.acquiredTopic(topic1, voidCallback, null);
        Assert.assertTrue(BooleanCallbackQueue.take().left());
        Assert.assertTrue(sm.top2sub2seq.containsKey(topic1));
        Assert.assertEquals(0, sm.top2sub2seq.get(topic1).size());

        sm.unsubscribe(topic1, sub1, voidCallback, null);
        Assert.assertEquals(ConcurrencyUtils.take(BooleanCallbackQueue).right().getClass(),
                PubSubException.ClientNotSubscribedException.class);

        //
        // Try to attach to a subscription.
        subRequest = SubscribeRequest.newBuilder().setCreateOrAttach(CreateOrAttach.ATTACH).setSubscriberId(sub1)
                .build();
        sm.serveSubscribeRequest(topic1, subRequest, msgId, subDataCallback, null);
        Assert.assertEquals(ConcurrencyUtils.take(subDataCallbackQueue).right().getClass(),
                PubSubException.ClientNotSubscribedException.class);

        // now create
        subRequest = SubscribeRequest.newBuilder().setCreateOrAttach(CreateOrAttach.CREATE).setSubscriberId(sub1)
                .build();
        sm.serveSubscribeRequest(topic1, subRequest, msgId, subDataCallback, null);
        Assert.assertEquals(msgId.getLocalComponent(),
                ConcurrencyUtils.take(subDataCallbackQueue).left().getState().getMsgId().getLocalComponent());
        Assert.assertEquals(msgId.getLocalComponent(),
                sm.top2sub2seq.get(topic1).get(sub1).getLastConsumeSeqId().getLocalComponent());

        // try to create again
        sm.serveSubscribeRequest(topic1, subRequest, msgId, subDataCallback, null);
        Assert.assertEquals(ConcurrencyUtils.take(subDataCallbackQueue).right().getClass(),
                PubSubException.ClientAlreadySubscribedException.class);
        Assert.assertEquals(msgId.getLocalComponent(),
                sm.top2sub2seq.get(topic1).get(sub1).getLastConsumeSeqId().getLocalComponent());

        sm.lostTopic(topic1);
        sm.acquiredTopic(topic1, voidCallback, null);
        Assert.assertTrue(BooleanCallbackQueue.take().left());

        // try to attach
        subRequest = SubscribeRequest.newBuilder().setCreateOrAttach(CreateOrAttach.ATTACH).setSubscriberId(sub1)
                .build();
        MessageSeqId msgId1 = MessageSeqId.newBuilder().setLocalComponent(msgId.getLocalComponent() + 10).build();
        sm.serveSubscribeRequest(topic1, subRequest, msgId1, subDataCallback, null);
        Assert.assertEquals(msgId.getLocalComponent(),
                subDataCallbackQueue.take().left().getState().getMsgId().getLocalComponent());
        Assert.assertEquals(msgId.getLocalComponent(),
                sm.top2sub2seq.get(topic1).get(sub1).getLastConsumeSeqId().getLocalComponent());

        // now manipulate the consume ptrs
        // don't give it enough to have it persist to ZK
        MessageSeqId msgId2 = MessageSeqId.newBuilder().setLocalComponent(
                msgId.getLocalComponent() + cfg.getConsumeInterval() - 1).build();
        sm.setConsumeSeqIdForSubscriber(topic1, sub1, msgId2, voidCallback, null);
        Assert.assertTrue(BooleanCallbackQueue.take().left());
        Assert.assertEquals(msgId2.getLocalComponent(),
                sm.top2sub2seq.get(topic1).get(sub1).getLastConsumeSeqId().getLocalComponent());
        Assert.assertEquals(msgId.getLocalComponent(),
                sm.top2sub2seq.get(topic1).get(sub1).getSubscriptionState().getMsgId().getLocalComponent());
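        // The manager only flushes the consume pointer to the metadata store
        // once it has advanced by at least cfg.getConsumeInterval() past the
        // persisted subscription state; msgId2 above stays one short of that
        // threshold, so the in-memory pointer moves while the persisted
        // state's msg-id does not.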
        // give it more so that it will write to ZK
        MessageSeqId msgId3 = MessageSeqId.newBuilder().setLocalComponent(
                msgId.getLocalComponent() + cfg.getConsumeInterval() + 1).build();
        sm.setConsumeSeqIdForSubscriber(topic1, sub1, msgId3, voidCallback, null);
        Assert.assertTrue(BooleanCallbackQueue.take().left());

        sm.lostTopic(topic1);
        sm.acquiredTopic(topic1, voidCallback, null);
        Assert.assertTrue(BooleanCallbackQueue.take().left());
        Assert.assertEquals(msgId3.getLocalComponent(),
                sm.top2sub2seq.get(topic1).get(sub1).getLastConsumeSeqId().getLocalComponent());
        Assert.assertEquals(msgId3.getLocalComponent(),
                sm.top2sub2seq.get(topic1).get(sub1).getSubscriptionState().getMsgId().getLocalComponent());

        // finally unsubscribe
        sm.unsubscribe(topic1, sub1, voidCallback, null);
        Assert.assertTrue(BooleanCallbackQueue.take().left());

        sm.lostTopic(topic1);
        sm.acquiredTopic(topic1, voidCallback, null);
        Assert.assertTrue(BooleanCallbackQueue.take().left());
        Assert.assertFalse(sm.top2sub2seq.get(topic1).containsKey(sub1));
    }
}

bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/subscriptions/TestUpdateSubscriptionState.java
package org.apache.hedwig.server.subscriptions;

import java.util.concurrent.SynchronousQueue;

import org.apache.hedwig.client.HedwigClient;
import org.apache.hedwig.client.api.MessageHandler;
import org.apache.hedwig.client.api.Publisher;
import org.apache.hedwig.client.api.Subscriber;
import org.apache.hedwig.client.conf.ClientConfiguration;
import org.apache.hedwig.protocol.PubSubProtocol.Message;
import org.apache.hedwig.protocol.PubSubProtocol.SubscribeRequest.CreateOrAttach;
import org.apache.hedwig.server.HedwigHubTestBase;
import org.apache.hedwig.server.common.ServerConfiguration;
import org.apache.hedwig.util.Callback;
import org.apache.hedwig.util.ConcurrencyUtils;
import org.apache.hedwig.util.HedwigSocketAddress;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

import com.google.protobuf.ByteString;

public class TestUpdateSubscriptionState extends HedwigHubTestBase {

    private static final int RETENTION_SECS_VALUE = 100;

    // Client side variables
    protected HedwigClient client;
    protected Publisher publisher;
    protected Subscriber subscriber;

    // SynchronousQueues to verify async calls
    private final SynchronousQueue<Boolean> queue = new SynchronousQueue<Boolean>();

    // Test implementation of subscriber's message handler
    class OrderCheckingMessageHandler implements MessageHandler {

        ByteString topic;
        ByteString subscriberId;
        int startMsgId;
        int numMsgs;
        int endMsgId;
        boolean inOrder = true;

        OrderCheckingMessageHandler(ByteString topic, ByteString subscriberId,
                                    int startMsgId, int numMsgs) {
            this.topic = topic;
            this.subscriberId = subscriberId;
            this.startMsgId = startMsgId;
            this.numMsgs = numMsgs;
            this.endMsgId = startMsgId + numMsgs - 1;
        }

        @Override
        public void deliver(ByteString thisTopic, ByteString thisSubscriberId,
                            Message msg, Callback<Void> callback, Object context) {
            if (!topic.equals(thisTopic) || !subscriberId.equals(thisSubscriberId)) {
                return;
            }
            // check order
            int msgId = Integer.parseInt(msg.getBody().toStringUtf8());
            if (logger.isDebugEnabled()) {
                logger.debug("Received message : " + msgId);
            }
            if (inOrder) {
                if (startMsgId != msgId) {
                    logger.error("Expected message " + startMsgId + ", but received message " + msgId);
                    inOrder = false;
                } else {
                    ++startMsgId;
                }
            }
            callback.operationFinished(context, null);
            if (msgId == endMsgId) {
                new Thread(new Runnable() {
                    @Override
                    public void run() {
                        if (logger.isDebugEnabled()) {
                            logger.debug("Deliver finished!");
                        }
                        ConcurrencyUtils.put(queue, true);
                    }
                }).start();
            }
        }

        public boolean isInOrder() {
            return inOrder;
        }
    }

    public TestUpdateSubscriptionState() {
        super(1);
    }

    protected class NewHubServerConfiguration extends HubServerConfiguration {

        public NewHubServerConfiguration(int serverPort, int sslServerPort) {
            super(serverPort, sslServerPort);
        }

        @Override
        public int getRetentionSecs() {
            return RETENTION_SECS_VALUE;
        }
    }

    @Override
    protected ServerConfiguration getServerConfiguration(int serverPort, int sslServerPort) {
        return new NewHubServerConfiguration(serverPort, sslServerPort);
    }

    protected class TestClientConfiguration extends HubClientConfiguration {
        @Override
        public boolean isAutoSendConsumeMessageEnabled() {
            return true;
        }
    }

    @Override
    @Before
    public void setUp() throws Exception {
        super.setUp();
        client = new HedwigClient(new TestClientConfiguration());
        publisher = client.getPublisher();
        subscriber = client.getSubscriber();
    }

    @Override
    @After
    public void tearDown() throws Exception {
        client.close();
        super.tearDown();
    }
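    // OrderCheckingMessageHandler above acks every message through the
    // delivery callback and, on the last expected message, hands the "done"
    // signal to a fresh thread: SynchronousQueue.put() blocks until the test
    // thread takes the value, and blocking inside deliver() would stall the
    // delivery thread.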
ByteString.copyFromUtf8("TestConsumeWhenTopicRelease"); ByteString subId = ByteString.copyFromUtf8("mysub"); int startMsgId = 0; int numMsgs = 10; // subscriber in client subscriber.subscribe(topic, subId, CreateOrAttach.CREATE_OR_ATTACH); // start delivery OrderCheckingMessageHandler ocm = new OrderCheckingMessageHandler( topic, subId, startMsgId, numMsgs); subscriber.startDelivery(topic, subId, ocm); for (int i=0; i cb, Object ctx) { if (shouldError) { cb.operationFailed(ctx, new PubSubException.ServiceDownException("Asked to fail")); return; } if (topics.contains(topic) // already own it || shouldOwnEveryNewTopic) { super.realGetOwner(topic, shouldClaim, cb, ctx); return; } else { // return some other address cb.operationFinished(ctx, new HedwigSocketAddress("124.31.0.1:80")); } } } TestConcurrentTopicAcquisition.java000066400000000000000000000174051244507361200376060ustar00rootroot00000000000000bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/topics/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hedwig.server.topics; import java.util.concurrent.LinkedBlockingQueue; import java.util.concurrent.SynchronousQueue; import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicInteger; import org.apache.hedwig.client.conf.ClientConfiguration; import org.apache.hedwig.client.HedwigClient; import org.apache.hedwig.client.api.Publisher; import org.apache.hedwig.client.api.Subscriber; import org.apache.hedwig.exceptions.PubSubException; import org.apache.hedwig.protocol.PubSubProtocol.SubscribeRequest.CreateOrAttach; import org.apache.hedwig.server.HedwigHubTestBase; import org.apache.hedwig.util.Callback; import org.apache.hedwig.util.ConcurrencyUtils; import org.junit.Test; import com.google.protobuf.ByteString; public class TestConcurrentTopicAcquisition extends HedwigHubTestBase { // Client variables protected HedwigClient client; protected Publisher publisher; protected Subscriber subscriber; final LinkedBlockingQueue subscribers = new LinkedBlockingQueue(); final ByteString topic = ByteString.copyFromUtf8("concurrent-topic"); final int numSubscribers = 300; final AtomicInteger numDone = new AtomicInteger(0); // SynchronousQueues to verify async calls private final SynchronousQueue queue = new SynchronousQueue(); class SubCallback implements Callback { ByteString subId; public SubCallback(ByteString subId) { this.subId = subId; } @Override public void operationFinished(Object ctx, Void resultOfOperation) { if (logger.isDebugEnabled()) { logger.debug("subscriber " + subId.toStringUtf8() + " succeed."); } int done = numDone.incrementAndGet(); if (done == numSubscribers) { ConcurrencyUtils.put(queue, false); } } @Override public void operationFailed(Object ctx, PubSubException exception) { if 
            if (logger.isDebugEnabled()) {
                logger.debug("subscriber " + subId.toStringUtf8() + " failed : ", exception);
            }
            ConcurrencyUtils.put(subscribers, subId);
            // ConcurrencyUtils.put(queue, false);
        }
    }

    @Override
    public void setUp() throws Exception {
        super.setUp();
        client = new HedwigClient(new HubClientConfiguration());
        publisher = client.getPublisher();
        subscriber = client.getSubscriber();
    }

    @Override
    public void tearDown() throws Exception {
        // sub.interrupt();
        // sub.join();
        client.close();
        super.tearDown();
    }

    @Test(timeout=60000)
    public void testTopicAcquisition() throws Exception {
        logger.info("Start concurrent topic acquisition test.");

        // take one bookie down to cause a not-enough-bookies exception
        logger.info("Tear down one bookie server.");
        bktb.tearDownOneBookieServer();

        // In the current implementation, the first several subscriptions will
        // succeed in putting the topic into the topic manager's set, because
        // the torn-down bookie server's zk node needs time to disappear. Some
        // subscriptions will create the ledger successfully; the rest will
        // then fail. The race condition is that the topic manager owns the
        // topic but the persistence manager doesn't.

        // 300 subscribers subscribe to the same topic
        final AtomicBoolean inRedirectLoop = new AtomicBoolean(false);
        numDone.set(0);
        for (int i=0; i() {
                private void tick() {
                    if (numDone.incrementAndGet() == numSubscribers) {
                        ConcurrencyUtils.put(queue, true);
                    }
                }

                @Override
                public void operationFinished(Object ctx, Void resultOfOperation) {
                    tick();
                }

                @Override
                public void operationFailed(Object ctx, PubSubException exception) {
                    if (exception instanceof PubSubException.ServiceDownException) {
                        String msg = exception.getMessage();
                        if (msg.indexOf("ServerRedirectLoopException") > 0) {
                            inRedirectLoop.set(true);
                        }
                        if (logger.isDebugEnabled()) {
                            logger.debug("Operation failed : ", exception);
                        }
                    }
                    tick();
                }
            }, null);
        }

        queue.take();
        // TODO: remove comment after we fix the issue
        // Assert.assertEquals(false, inRedirectLoop.get());

        // start a thread to send subscriptions
        numDone.set(0);
        Thread sub = new Thread(new Runnable() {
            @Override
            public void run() {
                logger.info("sub thread started");
                try {
                    // 100 subscribers subscribe to the same topic
                    for (int i=0; i

bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/topics/TestMMTopicManager.java

public class TestMMTopicManager extends MetadataManagerFactoryTestCase {

    static Logger LOG = LoggerFactory.getLogger(TestMMTopicManager.class);

    protected MMTopicManager tm;
    protected TopicOwnershipManager tom;

    protected class CallbackQueue<T> implements Callback<T> {
        SynchronousQueue<Either<T, Exception>> q = new SynchronousQueue<Either<T, Exception>>();

        public SynchronousQueue<Either<T, Exception>> getQueue() {
            return q;
        }

        public Either<T, Exception> take() throws InterruptedException {
            return q.take();
        }

        @Override
        public void operationFailed(Object ctx, final PubSubException exception) {
            LOG.error("got exception: " + exception);
            new Thread(new Runnable() {
                @Override
                public void run() {
                    ConcurrencyUtils.put(q, Either.of((T) null, (Exception) exception));
                }
            }).start();
        }

        @Override
        public void operationFinished(Object ctx, final T resultOfOperation) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    ConcurrencyUtils.put(q, Either.of(resultOfOperation, (Exception) null));
                }
            }).start();
        }
    }

    protected CallbackQueue<HedwigSocketAddress> addrCbq = new CallbackQueue<HedwigSocketAddress>();
    protected CallbackQueue<ByteString> bsCbq = new CallbackQueue<ByteString>();
    protected CallbackQueue<Void> voidCbq = new CallbackQueue<Void>();

    protected ByteString topic = ByteString.copyFromUtf8("topic");
    protected HedwigSocketAddress me;
    protected ScheduledExecutorService scheduler;

    public TestMMTopicManager(String metaManagerCls) {
        super(metaManagerCls);
    }

    @Override
    @Before
    public void setUp() throws Exception {
        super.setUp();
        me = conf.getServerAddr();
        scheduler = Executors.newSingleThreadScheduledExecutor();
        tom = metadataManagerFactory.newTopicOwnershipManager();
        tm = new MMTopicManager(conf, zk, metadataManagerFactory, scheduler);
    }
    @Override
    @After
    public void tearDown() throws Exception {
        tom.close();
        tm.stop();
        super.tearDown();
    }

    @Test(timeout=60000)
    public void testGetOwnerSingle() throws Exception {
        tm.getOwner(topic, false, addrCbq, null);
        Assert.assertEquals(me, check(addrCbq.take()));
    }

    protected ByteString mkTopic(int i) {
        return ByteString.copyFromUtf8(topic.toStringUtf8() + i);
    }

    protected <T> T check(Either<T, Exception> ex) throws Exception {
        if (ex.left() == null)
            throw ex.right();
        else
            return ex.left();
    }

    public static class CustomServerConfiguration extends ServerConfiguration {
        int port;

        public CustomServerConfiguration(int port) {
            this.port = port;
        }

        @Override
        public int getServerPort() {
            return port;
        }
    }

    @Test(timeout=60000)
    public void testGetOwnerMulti() throws Exception {
        ServerConfiguration conf1 = new CustomServerConfiguration(conf.getServerPort() + 1),
            conf2 = new CustomServerConfiguration(conf.getServerPort() + 2);
        MMTopicManager tm1 = new MMTopicManager(conf1, zk, metadataManagerFactory, scheduler),
            tm2 = new MMTopicManager(conf2, zk, metadataManagerFactory, scheduler);

        tm.getOwner(topic, false, addrCbq, null);
        HedwigSocketAddress owner = check(addrCbq.take());

        for (int i = 0; i < 100; ++i) {
            tm.getOwner(topic, false, addrCbq, null);
            Assert.assertEquals(owner, check(addrCbq.take()));

            tm1.getOwner(topic, false, addrCbq, null);
            Assert.assertEquals(owner, check(addrCbq.take()));

            tm2.getOwner(topic, false, addrCbq, null);
            Assert.assertEquals(owner, check(addrCbq.take()));
        }

        for (int i = 0; i < 100; ++i) {
            if (!owner.equals(me))
                break;
            tm.getOwner(mkTopic(i), false, addrCbq, null);
            owner = check(addrCbq.take());
            if (i == 99)
                Assert.fail("Never chose another owner");
        }

        tm1.stop();
        tm2.stop();
    }

    @Test(timeout=60000)
    public void testLoadBalancing() throws Exception {
        tm.getOwner(topic, false, addrCbq, null);
        Assert.assertEquals(me, check(addrCbq.take()));

        ServerConfiguration conf1 = new CustomServerConfiguration(conf.getServerPort() + 1);
        TopicManager tm1 = new MMTopicManager(conf1, zk, metadataManagerFactory, scheduler);
        ByteString topic1 = mkTopic(1);
        tm.getOwner(topic1, false, addrCbq, null);
        Assert.assertEquals(conf1.getServerAddr(), check(addrCbq.take()));

        tm1.stop();
    }
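    // StubOwnershipChangeListener below reports each acquiredTopic/lostTopic
    // event as a (topic, acquired?) pair on a SynchronousQueue, and does so
    // from a spawned thread so the topic manager's own thread is never blocked
    // waiting for the test to take the pair.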
    class StubOwnershipChangeListener implements TopicOwnershipChangeListener {
        boolean failure;
        SynchronousQueue<Pair<ByteString, Boolean>> bsQueue;

        public StubOwnershipChangeListener(SynchronousQueue<Pair<ByteString, Boolean>> bsQueue) {
            this.bsQueue = bsQueue;
        }

        public void setFailure(boolean failure) {
            this.failure = failure;
        }

        @Override
        public void lostTopic(final ByteString topic) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    ConcurrencyUtils.put(bsQueue, Pair.of(topic, false));
                }
            }).start();
        }

        public void acquiredTopic(final ByteString topic, final Callback<Void> callback, final Object ctx) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    ConcurrencyUtils.put(bsQueue, Pair.of(topic, true));
                    if (failure) {
                        callback.operationFailed(ctx, new PubSubException.ServiceDownException("Asked to fail"));
                    } else {
                        callback.operationFinished(ctx, null);
                    }
                }
            }).start();
        }
    }

    @Test(timeout=60000)
    public void testOwnershipChange() throws Exception {
        SynchronousQueue<Pair<ByteString, Boolean>> bsQueue =
            new SynchronousQueue<Pair<ByteString, Boolean>>();

        StubOwnershipChangeListener listener = new StubOwnershipChangeListener(bsQueue);
        tm.addTopicOwnershipChangeListener(listener);

        // regular acquire
        tm.getOwner(topic, true, addrCbq, null);
        Pair<ByteString, Boolean> pair = bsQueue.take();
        Assert.assertEquals(topic, pair.first());
        Assert.assertTrue(pair.second());
        Assert.assertEquals(me, check(addrCbq.take()));
        assertOwnershipNodeExists();

        // topic that I already own
        tm.getOwner(topic, true, addrCbq, null);
        Assert.assertEquals(me, check(addrCbq.take()));
        Assert.assertTrue(bsQueue.isEmpty());
        assertOwnershipNodeExists();

        // regular release
        tm.releaseTopic(topic, cb, null);
        pair = bsQueue.take();
        Assert.assertEquals(topic, pair.first());
        Assert.assertFalse(pair.second());
        Assert.assertTrue(queue.take());
        assertOwnershipNodeDoesntExist();

        // releasing topic that I don't own
        tm.releaseTopic(mkTopic(0), cb, null);
        Assert.assertTrue(queue.take());
        Assert.assertTrue(bsQueue.isEmpty());

        // set listener to return error
        listener.setFailure(true);
        tm.getOwner(topic, true, addrCbq, null);
        pair = bsQueue.take();
        Assert.assertEquals(topic, pair.first());
        Assert.assertTrue(pair.second());
        Assert.assertEquals(PubSubException.ServiceDownException.class,
                            ((CompositeException) addrCbq.take().right())
                                .getExceptions().iterator().next().getClass());
        Assert.assertFalse(tm.topics.contains(topic));
        Thread.sleep(100);
        assertOwnershipNodeDoesntExist();
    }

    public void assertOwnershipNodeExists() throws Exception {
        StubCallback<Versioned<HubInfo>> callback = new StubCallback<Versioned<HubInfo>>();
        tom.readOwnerInfo(topic, callback, null);
        Versioned<HubInfo> hubInfo = callback.queue.take().left();
        Assert.assertEquals(tm.addr, hubInfo.getValue().getAddress());
    }

    public void assertOwnershipNodeDoesntExist() throws Exception {
        StubCallback<Versioned<HubInfo>> callback = new StubCallback<Versioned<HubInfo>>();
        tom.readOwnerInfo(topic, callback, null);
        Versioned<HubInfo> hubInfo = callback.queue.take().left();
        Assert.assertEquals(null, hubInfo);
    }

    @Test(timeout=60000)
    public void testZKClientDisconnected() throws Exception {
        // First assert ownership of the topic
        tm.getOwner(topic, true, addrCbq, null);
        Assert.assertEquals(me, check(addrCbq.take()));

        // Suspend the ZKTopicManager and make sure calls to getOwner error out
        tm.isSuspended = true;
        tm.getOwner(topic, true, addrCbq, null);
        Assert.assertEquals(PubSubException.ServiceDownException.class,
                            addrCbq.take().right().getClass());

        // Release the topic. This should not error out even if suspended.
        tm.releaseTopic(topic, cb, null);
        Assert.assertTrue(queue.take());
        assertOwnershipNodeDoesntExist();

        // Restart the ZKTopicManager and make sure calls to getOwner are okay
        tm.isSuspended = false;
        tm.getOwner(topic, true, addrCbq, null);
        Assert.assertEquals(me, check(addrCbq.take()));
        assertOwnershipNodeExists();
    }
}

bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/server/topics/TestZkTopicManager.java
package org.apache.hedwig.server.topics;

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.SynchronousQueue;

import org.apache.zookeeper.KeeperException;
import org.junit.After;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;

import com.google.protobuf.ByteString;

import org.apache.hedwig.exceptions.PubSubException;
import org.apache.hedwig.exceptions.PubSubException.CompositeException;
import org.apache.hedwig.server.common.ServerConfiguration;
import org.apache.hedwig.util.Callback;
import org.apache.hedwig.util.ConcurrencyUtils;
import org.apache.hedwig.util.Either;
import org.apache.hedwig.util.HedwigSocketAddress;
import org.apache.hedwig.util.Pair;
import org.apache.hedwig.zookeeper.ZooKeeperTestBase;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class TestZkTopicManager extends ZooKeeperTestBase {

    static Logger LOG = LoggerFactory.getLogger(TestZkTopicManager.class);

    protected ZkTopicManager tm;

    protected class CallbackQueue<T> implements Callback<T> {
        SynchronousQueue<Either<T, Exception>> q = new SynchronousQueue<Either<T, Exception>>();

        public SynchronousQueue<Either<T, Exception>> getQueue() {
            return q;
        }

        public Either<T, Exception> take() throws InterruptedException {
            return q.take();
        }

        @Override
        public void operationFailed(Object ctx, final PubSubException exception) {
            LOG.error("got exception: " + exception);
            new Thread(new Runnable() {
                @Override
                public void run() {
                    ConcurrencyUtils.put(q, Either.of((T) null, (Exception) exception));
                }
            }).start();
        }

        @Override
        public void operationFinished(Object ctx, final T resultOfOperation) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    ConcurrencyUtils.put(q, Either.of(resultOfOperation, (Exception) null));
                }
            }).start();
        }
    }

    protected CallbackQueue<HedwigSocketAddress> addrCbq = new CallbackQueue<HedwigSocketAddress>();
    protected CallbackQueue<ByteString> bsCbq = new CallbackQueue<ByteString>();
    protected CallbackQueue<Void> voidCbq = new CallbackQueue<Void>();

    protected ByteString topic = ByteString.copyFromUtf8("topic");
    protected ServerConfiguration cfg;
    protected HedwigSocketAddress me;
    protected ScheduledExecutorService scheduler;

    @Override
    @Before
    public void setUp() throws Exception {
        super.setUp();
        cfg = new ServerConfiguration();
        me = cfg.getServerAddr();
        scheduler = Executors.newSingleThreadScheduledExecutor();
        tm = new ZkTopicManager(zk, cfg, scheduler);
    }

    @Override
    @After
    public void tearDown() throws Exception {
        tm.stop();
        super.tearDown();
    }

    @Test(timeout=60000)
    public void testGetOwnerSingle() throws Exception {
        tm.getOwner(topic, false, addrCbq, null);
        Assert.assertEquals(me, check(addrCbq.take()));
    }

    protected ByteString mkTopic(int i) {
        return ByteString.copyFromUtf8(topic.toStringUtf8() + i);
    }

    protected <T> T check(Either<T, Exception> ex) throws Exception {
        if (ex.left() == null)
            throw ex.right();
        else
            return ex.left();
    }

    public static class CustomServerConfiguration extends ServerConfiguration {
        int port;

        public CustomServerConfiguration(int port) {
            this.port = port;
        }

        @Override
        public int getServerPort() {
            return port;
        }
    }

    @Test(timeout=60000)
    public void testGetOwnerMulti() throws Exception {
        ServerConfiguration cfg1 = new CustomServerConfiguration(cfg.getServerPort() + 1),
            cfg2 = new CustomServerConfiguration(cfg.getServerPort() + 2);
        // TODO change cfg1 cfg2 params
        ZkTopicManager tm1 = new ZkTopicManager(zk, cfg1, scheduler),
            tm2 = new ZkTopicManager(zk, cfg2, scheduler);

        tm.getOwner(topic, false, addrCbq, null);
        HedwigSocketAddress owner = check(addrCbq.take());

        // If we were told to have another person claim the topic, make them
        // claim the topic.
        if (owner.getPort() == cfg1.getServerPort())
            tm1.getOwner(topic, true, addrCbq, null);
        else if (owner.getPort() == cfg2.getServerPort())
            tm2.getOwner(topic, true, addrCbq, null);
        if (owner.getPort() != cfg.getServerPort())
            Assert.assertEquals(owner, check(addrCbq.take()));

        for (int i = 0; i < 100; ++i) {
            tm.getOwner(topic, false, addrCbq, null);
            Assert.assertEquals(owner, check(addrCbq.take()));

            tm1.getOwner(topic, false, addrCbq, null);
            Assert.assertEquals(owner, check(addrCbq.take()));

            tm2.getOwner(topic, false, addrCbq, null);
            Assert.assertEquals(owner, check(addrCbq.take()));
        }

        // Give us 100 chances to choose another owner if not shouldClaim.
        for (int i = 0; i < 100; ++i) {
            if (!owner.equals(me))
                break;
            tm.getOwner(mkTopic(i), false, addrCbq, null);
            owner = check(addrCbq.take());
            if (i == 99)
                Assert.fail("Never chose another owner");
        }

        // Make sure we always choose ourselves if shouldClaim.
        for (int i = 0; i < 100; ++i) {
            tm.getOwner(mkTopic(100), true, addrCbq, null);
            Assert.assertEquals(me, check(addrCbq.take()));
        }

        tm1.stop();
        tm2.stop();
    }

    @Test(timeout=60000)
    public void testLoadBalancing() throws Exception {
        tm.getOwner(topic, false, addrCbq, null);
        Assert.assertEquals(me, check(addrCbq.take()));

        ServerConfiguration cfg1 = new CustomServerConfiguration(cfg.getServerPort() + 1);
        TopicManager tm1 = new ZkTopicManager(zk, cfg1, scheduler);
        ByteString topic1 = mkTopic(1);
        tm.getOwner(topic1, false, addrCbq, null);
        Assert.assertEquals(cfg1.getServerAddr(), check(addrCbq.take()));

        tm1.stop();
    }

    class StubOwnershipChangeListener implements TopicOwnershipChangeListener {
        boolean failure;
        SynchronousQueue<Pair<ByteString, Boolean>> bsQueue;

        public StubOwnershipChangeListener(SynchronousQueue<Pair<ByteString, Boolean>> bsQueue) {
            this.bsQueue = bsQueue;
        }

        public void setFailure(boolean failure) {
            this.failure = failure;
        }

        @Override
        public void lostTopic(final ByteString topic) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    ConcurrencyUtils.put(bsQueue, Pair.of(topic, false));
                }
            }).start();
        }

        public void acquiredTopic(final ByteString topic, final Callback<Void> callback, final Object ctx) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    ConcurrencyUtils.put(bsQueue, Pair.of(topic, true));
                    if (failure) {
                        callback.operationFailed(ctx, new PubSubException.ServiceDownException("Asked to fail"));
                    } else {
                        callback.operationFinished(ctx, null);
                    }
                }
            }).start();
        }
    }

    @Test(timeout=60000)
    public void testOwnershipChange() throws Exception {
        SynchronousQueue<Pair<ByteString, Boolean>> bsQueue =
            new SynchronousQueue<Pair<ByteString, Boolean>>();

        StubOwnershipChangeListener listener = new StubOwnershipChangeListener(bsQueue);
        tm.addTopicOwnershipChangeListener(listener);

        // regular acquire
        tm.getOwner(topic, true, addrCbq, null);
        Pair<ByteString, Boolean> pair = bsQueue.take();
        Assert.assertEquals(topic, pair.first());
        Assert.assertTrue(pair.second());
        Assert.assertEquals(me, check(addrCbq.take()));
        assertOwnershipNodeExists();

        // topic that I already own
        tm.getOwner(topic, true, addrCbq, null);
        Assert.assertEquals(me, check(addrCbq.take()));
        Assert.assertTrue(bsQueue.isEmpty());
        assertOwnershipNodeExists();

        // regular release
        tm.releaseTopic(topic, cb, null);
        pair = bsQueue.take();
        Assert.assertEquals(topic, pair.first());
        Assert.assertFalse(pair.second());
        Assert.assertTrue(queue.take());
        assertOwnershipNodeDoesntExist();

        // releasing topic that I don't own
        tm.releaseTopic(mkTopic(0), cb, null);
        Assert.assertTrue(queue.take());
        Assert.assertTrue(bsQueue.isEmpty());

        // set listener to return error
        listener.setFailure(true);
        tm.getOwner(topic, true, addrCbq, null);
        pair = bsQueue.take();
        Assert.assertEquals(topic, pair.first());
        Assert.assertTrue(pair.second());
        Assert.assertEquals(PubSubException.ServiceDownException.class,
                            ((CompositeException) addrCbq.take().right())
                                .getExceptions().iterator().next().getClass());
        Assert.assertFalse(tm.topics.contains(topic));
        Thread.sleep(100);
        assertOwnershipNodeDoesntExist();
    }

    public void assertOwnershipNodeExists() throws Exception {
        byte[] data = zk.getData(tm.hubPath(topic), false, null);
        Assert.assertEquals(HubInfo.parse(new String(data)).getAddress(), tm.addr);
    }

    public void assertOwnershipNodeDoesntExist() throws Exception {
        try {
            zk.getData(tm.hubPath(topic), false, null);
            Assert.assertTrue(false);
        } catch (KeeperException e) {
            Assert.assertEquals(e.code(), KeeperException.Code.NONODE);
        }
    }

    @Test(timeout=60000)
    public void testZKClientDisconnected() throws Exception {
        // First assert ownership of the topic
        tm.getOwner(topic, true, addrCbq, null);
        Assert.assertEquals(me, check(addrCbq.take()));

        // Suspend the ZKTopicManager and make sure calls to getOwner error out
        tm.isSuspended = true;
        tm.getOwner(topic, true, addrCbq, null);
        Assert.assertEquals(PubSubException.ServiceDownException.class,
                            addrCbq.take().right().getClass());

        // Release the topic. This should not error out even if suspended.
        tm.releaseTopic(topic, cb, null);
        Assert.assertTrue(queue.take());
        assertOwnershipNodeDoesntExist();

        // Restart the ZKTopicManager and make sure calls to getOwner are okay
        tm.isSuspended = false;
        tm.getOwner(topic, true, addrCbq, null);
        Assert.assertEquals(me, check(addrCbq.take()));
        assertOwnershipNodeExists();
    }
}

bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/zookeeper/TestZkUtils.java
package org.apache.hedwig.zookeeper;

import java.util.Arrays;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs.Ids;
import org.junit.Assert;
import org.junit.Test;

public class TestZkUtils extends ZooKeeperTestBase {

    @Test(timeout=60000)
    public void testCreateFullPathOptimistic() throws Exception {
        testPath("/a/b/c", CreateMode.EPHEMERAL);
        testPath("/b/c/d", CreateMode.PERSISTENT);
        testPath("/b/c/d/e", CreateMode.PERSISTENT);
    }

    void testPath(String path, CreateMode mode) throws Exception {
        byte[] data = new byte[] { 77 };
        ZkUtils.createFullPathOptimistic(zk, path, data, Ids.OPEN_ACL_UNSAFE, mode, strCb, null);
        Assert.assertTrue(queue.take());
        Assert.assertTrue(Arrays.equals(data, zk.getData(path, false, null)));
    }
}

bookkeeper-release-4.2.4/hedwig-server/src/test/java/org/apache/hedwig/zookeeper/ZooKeeperTestBase.java

package org.apache.hedwig.zookeeper;

import java.util.concurrent.SynchronousQueue;

import org.apache.zookeeper.AsyncCallback;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.KeeperException.Code;
import org.apache.zookeeper.test.ClientBase;
import org.junit.After;
import org.junit.Before;

import org.apache.hedwig.exceptions.PubSubException;
import org.apache.hedwig.util.ConcurrencyUtils;
import org.apache.hedwig.util.Callback;

import org.apache.bookkeeper.test.PortManager;

/**
 * This is a base class for any tests that need a ZooKeeper client/server setup.
 */
public abstract class ZooKeeperTestBase extends ClientBase {

    protected ZooKeeper zk;

    protected SynchronousQueue<Boolean> queue = new SynchronousQueue<Boolean>();

    protected Callback<Void> cb = new Callback<Void>() {
        @Override
        public void operationFinished(Object ctx, Void result) {
            new Thread(new Runnable() {
                public void run() {
                    ConcurrencyUtils.put(queue, true);
                }
            }).start();
        }

        @Override
        public void operationFailed(Object ctx, PubSubException exception) {
            new Thread(new Runnable() {
                public void run() {
                    ConcurrencyUtils.put(queue, false);
                }
            }).start();
        }
    };

    protected AsyncCallback.StringCallback strCb = new AsyncCallback.StringCallback() {
        @Override
        public void processResult(int rc, String path, Object ctx, String name) {
            ConcurrencyUtils.put(queue, rc == Code.OK.intValue());
        }
    };

    protected AsyncCallback.VoidCallback voidCb = new AsyncCallback.VoidCallback() {
        @Override
        public void processResult(int rc, String path, Object ctx) {
            ConcurrencyUtils.put(queue, rc == Code.OK.intValue());
        }
    };

    @Override
    @Before
    public void setUp() throws Exception {
        hostPort = "127.0.0.1:" + PortManager.nextFreePort();
        super.setUp();
        zk = createClient();
    }

    @Override
    @After
    public void tearDown() throws Exception {
        super.tearDown();
        zk.close();
    }
}
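
// The Callback<Void> above deliberately hands queue.put() to a new thread:
// SynchronousQueue has no capacity, so put() blocks until the test thread
// takes the element, and blocking inside operationFinished()/operationFailed()
// would stall whichever thread invoked the callback. A minimal sketch of the
// hazard (variable names illustrative, not from the original source):
//
//     final SynchronousQueue<Boolean> q = new SynchronousQueue<Boolean>();
//     // blocks the invoking (e.g. event) thread until someone calls q.take():
//     q.put(true);
//     // non-blocking for the invoker: hand the put off to a fresh thread
//     new Thread(new Runnable() {
//         public void run() { ConcurrencyUtils.put(q, true); }
//     }).start();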
bookkeeper-release-4.2.4/hedwig-server/src/test/resources/log4j.properties

#
# Hedwig Logging Configuration
#
# Format is "<default threshold> (, <appender>)+

# DEFAULT: console appender only
log4j.rootLogger=INFO, CONSOLE

# Example with rolling log file
#log4j.rootLogger=DEBUG, CONSOLE, ROLLINGFILE

# Example with rolling log file and tracing
#log4j.rootLogger=TRACE, CONSOLE, ROLLINGFILE, TRACEFILE

log4j.logger.org.apache.zookeeper=ERROR

#
# Log INFO level and above messages to the console
#
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.Threshold=INFO
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} - %-5p - [%t:%C{1}@%L] - %m%n

#
# Add ROLLINGFILE to rootLogger to get log file output
#    Log DEBUG level and above messages to a log file
log4j.appender.ROLLINGFILE=org.apache.log4j.DailyRollingFileAppender
log4j.appender.ROLLINGFILE.Threshold=DEBUG
log4j.appender.ROLLINGFILE.File=hedwig-server.log
log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p - [%t:%C{1}@%L] - %m%n

# Max log file size of 10MB
log4j.appender.ROLLINGFILE.MaxFileSize=10MB
# uncomment the next line to limit number of backup files
#log4j.appender.ROLLINGFILE.MaxBackupIndex=10

log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n

#
# Add TRACEFILE to rootLogger to get log file output
#    Log DEBUG level and above messages to a log file
log4j.appender.TRACEFILE=org.apache.log4j.FileAppender
log4j.appender.TRACEFILE.Threshold=TRACE
log4j.appender.TRACEFILE.File=hedwig_trace.log
log4j.appender.TRACEFILE.layout=org.apache.log4j.PatternLayout
### Notice we are including log4j's NDC here (%x)
log4j.appender.TRACEFILE.layout.ConversionPattern=%d{ISO8601} - %-5p [%t:%C{1}@%L][%x] - %m%n

bookkeeper-release-4.2.4/pom.xml

[XML markup not recoverable; surviving text content: top-level parent POM
org.apache.bookkeeper:bookkeeper:4.2.4, packaging "pom", inheriting
org.apache:apache:9, inception year 2011; modules compat-deps, hedwig-client,
hedwig-server, hedwig-protocol, bookkeeper-server, bookkeeper-benchmark;
UTF-8 source encoding, protobuf 2.4.1, guava 13.0.1, site URL
http://zookeeper.apache.org/bookkeeper; build plugins include findbugs 2.3.2,
compiler 2.3.2 (source/target 1.6), surefire 2.9 (-Xmx1G, IPv4 stack, 1800s
timeout), javadoc 2.8 (aggregated Bookkeeper/Hedwig groups), assembly 2.2.1
using src/assemble/src.xml, apache-rat 0.7, jxr 2.1, pmd 2.3, and gpg signing
in the deploy profile.]
bookkeeper-release-4.2.4/src/assemble/bin.xml

[XML markup not recoverable; surviving text content: assembly descriptor "bin"
producing a tar.gz that bundles the project jar from target/, the conf and bin
directories (bin at mode 755, files 644), ${basedir}/*.txt, ../CHANGES.txt,
../README, src/main/resources/LICENSE.bin.txt as LICENSE and
src/main/resources/NOTICE.bin.txt as NOTICE, plus runtime dependencies under
/lib.]

bookkeeper-release-4.2.4/src/assemble/src.xml

[XML markup not recoverable; surviving text content: assembly descriptor "src"
producing a tar.gz source bundle that includes **/README, **/LICENSE,
**/NOTICE, **/pom.xml, **/src/**, **/conf/**, **/bin/**, **/*.txt and doc/*,
excludes .git/**, .gitignore, .svn, IDE metadata (*.iws, *.ipr, *.iml,
.classpath, .project, .settings), **/target/**, **/*.log, **/build/**,
**/file:/** and **/SecurityAuth.audit*, and maps target/site/apidocs to
doc/apidocs.]