Put group of entries together to journal queue copy 4 (#8)
* Fix memory leak issue of reading small entries (apache#3844)

* Make read entry request recyclable (apache#3842)

* Make read entry request recyclable

* Move recycle to finally block

* Fix test and comments

* Fix test

* Avoid unnecessary force write. (apache#3847)

* Avoid unnecessary force write.

* code clean.

* fix style

* Correct the running job name for the test group (apache#3851)

---

### Motivation

The running tests job name doesn't match the tests. Correct
the job name.

* add timeout for two flaky timeout tests (apache#3855)

* add V2 protocol and warmupMessages support for benchmark (apache#3856)

* disable trimStackTrace for code-coverage profile (apache#3854)

* Fix bkperf log directory not found (apache#3858)

### Motivation
When using the bkperf command `bin/bkperf journal append -j data -n 100000000 --sync true` to test the BookKeeper journal performance, it failed with the following exception:
```
[0.002s][error][logging] Error opening log file '/Users/hangc/Downloads/tmp/tc/batch/ta/bookkeeper-all-4.16.0-SNAPSHOT/logs/bkperf-gc.log': No such file or directory
[0.002s][error][logging] Initialization of output 'file=/Users/hangc/Downloads/tmp/tc/batch/ta/bookkeeper-all-4.16.0-SNAPSHOT/logs/bkperf-gc.log' using options 'filecount=5,filesize=64m' failed.
Invalid -Xlog option '-Xlog:gc=info:file=/Users/hangc/Downloads/tmp/tc/batch/ta/bookkeeper-all-4.16.0-SNAPSHOT/logs/bkperf-gc.log::filecount=5,filesize=64m', see error log for details.
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
```

The root cause is that the `logs` directory was not created.

### Modifications
Create the `logs` directory before bkperf starts.
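A minimal sketch of the fix (the real change is in `bin/bkperf`; the default directory name here is an assumption):

```shell
# Sketch of the bin/bkperf fix: create the log directory before the JVM
# starts, so '-Xlog:gc=info:file=${CLI_LOG_DIR}/bkperf-gc.log' can open
# its output file.
CLI_LOG_DIR=${CLI_LOG_DIR:-"./logs"}   # assumed default location

# The fix itself is this single line:
mkdir -p "${CLI_LOG_DIR}"

echo "log dir ready: ${CLI_LOG_DIR}"
```

Without the directory in place, the JVM's `-Xlog` initialization fails before `main` is ever reached, which is why the error surfaces as "Could not create the Java Virtual Machine".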

* [improve] Fix indexDirs upgrade failed (apache#3762)

* fix indexDirs upgrade failed

* Bump checkstyle-plugin from 3.1.2 to 3.2.1 (apache#3850)

* [Flaky] Fix flaky test in testRaceGuavaEvictAndReleaseBeforeRetain (apache#3857)

* Fix flaky test in testRaceGuavaEvictAndReleaseBeforeRetain

* format code

* Fix NPE in BenchThroughputLatency (apache#3859)

* Update website to 4.15.4 (apache#3862)

---

### Motivation

Update website to 4.15.4

* change rocksDB config level_compaction_dynamic_level_bytes to CFOptions (apache#3860)

### Motivation
After PR apache#3056, BookKeeper sets `level_compaction_dynamic_level_bytes=true` under `TableOptions` in `entry_location_rocksdb.conf.default`. Placed there, the option loses effect, which can cause RocksDB `.sst` file compaction ordering problems when a bookie is upgraded to a new release.
According to the RocksDB option-file format, `level_compaction_dynamic_level_bytes` must be set under `CFOptions`: https://github.com/facebook/rocksdb/blob/master/examples/rocksdb_option_file_example.ini

<img width="703" alt="image" src="https://user-images.githubusercontent.com/84127069/224640399-d5481fe5-7b75-4229-ac06-3d280aa9ae6d.png">


<img width="240" alt="image" src="https://user-images.githubusercontent.com/84127069/224640621-737d0a42-4e01-4f38-bd5a-862a93bc4b32.png">

### Changes

1. Change `level_compaction_dynamic_level_bytes=true` from `TableOptions` to `CFOptions` in `entry_location_rocksdb.conf.default`.
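A sketch of the corrected placement, following the RocksDB option-file section layout (the section headers here are illustrative of the format, not copied verbatim from `entry_location_rocksdb.conf.default`):

```ini
[CFOptions "default"]
  # Column-family option: this is where RocksDB expects it.
  level_compaction_dynamic_level_bytes=true

[TableOptions/BlockBasedTable "default"]
  # Block-based-table options only; level_compaction_dynamic_level_bytes
  # has no effect if it is placed in this section.
```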

* Correct the running job flag for the test group. (apache#3865)

* Release note for 4.15.4 (apache#3831)

---

### Motivation

Release note for 4.15.4

* Add trigger entry location index rocksDB compact interface. (apache#3802)

### Motivation
After a bookie instance has been running for a long time, the entry location index RocksDB `.sst` files can in some cases grow to 20-30GB for a single ledger data dir's location index, which makes RocksDB scan operations slower and can cause bookie client request timeouts.

Add a REST API that triggers entry location RocksDB compaction and reports the compaction status.

A full-range compaction of the entry location index drives higher I/O utilization on the index dirs, so it is best to trigger the compaction through this API during low-traffic periods.

**Example before RocksDB compaction:**
<img width="232" alt="image" src="https://user-images.githubusercontent.com/84127069/220893469-e6fbc1a3-c767-4ffe-8ae9-f05ad1833c50.png">


<img width="288" alt="image" src="https://user-images.githubusercontent.com/84127069/220891359-dc37e139-37b0-461b-8001-dcc48517366c.png">

**After RocksDB compaction:**
<img width="255" alt="image" src="https://user-images.githubusercontent.com/84127069/220891419-24267fa7-348c-4fbd-8b3e-70a99840bce5.png">

### Changes
1. Add a REST API to trigger entry location index RocksDB compaction.
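With the bookie HTTP server enabled, the new endpoint could be exercised roughly as follows. The path matches the `ENTRY_LOCATION_COMPACT` route added in `HttpRouter`; the port and HTTP methods are assumptions, not confirmed by this commit:

```shell
# Assumed bookie HTTP server address; adjust for your deployment.
BOOKIE="http://localhost:8080"
ENDPOINT="$BOOKIE/api/v1/bookie/entry_location_compact"

# Trigger an entry location RocksDB compaction (HTTP method is an assumption):
#   curl -X PUT "$ENDPOINT"
# Query the compaction status:
#   curl "$ENDPOINT"

echo "$ENDPOINT"
```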

* Pick the higher leak detection level between netty and bookkeeper. (apache#3794)

### Motivation
1. Pick the higher leak detection level between Netty and BookKeeper.
2. Enhance the BookKeeper leak detection value matching rule; it is now case-insensitive.

Detailed information is available at: https://lists.apache.org/thread/d3zw8bxhlg0wxfhocyjglq0nbxrww3sg

* Disable code coverage and codecov report (apache#3863)

### Motivation

There are two reasons we want to disable code coverage:

1. The current report results are not accurate.
2. We can't get a PR's unit-test code coverage because of the Apache Codecov permissions.

* Add small files check in garbage collection (apache#3631)

### Motivation
When we use `TransactionalEntryLogCompactor` to compact entry log files, it generates a lot of small entry log files whose file usage is usually greater than 90%, so they cannot be compacted unless the file usage decreases.

![image](https://user-images.githubusercontent.com/5436568/201135615-4d6072f5-e353-483d-9afb-48fad8134044.png)


### Changes
We introduce an entry log file size check during compaction, controlled by `gcEntryLogSizeRatio`.
If the total entry log file size is less than `gcEntryLogSizeRatio * logSizeLimit`, the entry log file will be compacted even though its usage is greater than 90%. This feature is disabled by default: the default value of `gcEntryLogSizeRatio` is `0.0`.
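As a configuration sketch, enabling the check might look like this in the bookie config (`gcEntryLogSizeRatio` is the property introduced by this PR; the `0.5` threshold is an illustrative value, and `bk_server.conf` is the conventional file name):

```properties
# bk_server.conf (fragment)
# Compact entry log files whose total size is below
# gcEntryLogSizeRatio * logSizeLimit, even when their usage is above
# the normal compaction threshold. 0.0 (the default) disables the check.
gcEntryLogSizeRatio=0.5
```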

* [improvement] Delay all audit tasks when there is an already delayed bookie check task (apache#3818)

### Motivation

Fixes apache#3817 

For details, see: apache#3817 

### Changes

When there is an `auditTask` during the `lostBookieRecoveryDelay` delay, other detection tasks should be skipped.

* Change order of doGcLedgers and extractMetaFromEntryLogs (apache#3869)

* [Bugfix] make metadataDriver initialization more robust (apache#3873)

Co-authored-by: zengqiang.xu <[email protected]>

* Enable CI for the streamstorage python client (apache#3875)

* Fix compaction threshold default value precision problem. (apache#3871)

* Fix compaction threshold precision problem.

* Fix compaction threshold precision problem.

* Single buffer for small add requests (apache#3783)

* Single buffer for small add requests

* Fixed checkstyle

* Fixed handling of CompositeByteBuf

* Fixed merge issues

* Fixed merge issues

* WIP

* Fixed test and removed dead code

* Removed unused import

* Fixed BookieJournalTest

* removed unused import

* fix the checkstyle

* fix failed test

* fix failed test

---------

Co-authored-by: chenhang <[email protected]>

* Add log for entry log file delete. (apache#3872)

* Add log for entry log file delete.

* add log info.

* Address the comment.

* Address the comment.

* revert the code.

* Improve group and flush add-responses after journal sync (apache#3848)

Descriptions of the changes in this PR:
This is an improvement for apache#3837

### Motivation
1. Currently, once `maxPendingResponsesSize` is expanded, it never decreases. => Make it flexible so it can shrink again.
2. Currently, after `prepareSendResponseV2` writes to a channel, we trigger all channels to flush their `pendingSendResponses`, even though only a few channels may need flushing; triggering all of them is wasteful. => Only flush the channel that performed `prepareSendResponseV2`.

---------

Co-authored-by: Penghui Li <[email protected]>
Co-authored-by: Yong Zhang <[email protected]>
Co-authored-by: Hang Chen <[email protected]>
Co-authored-by: wenbingshen <[email protected]>
Co-authored-by: ZhangJian He <[email protected]>
Co-authored-by: lixinyang <[email protected]>
Co-authored-by: YANGLiiN <[email protected]>
Co-authored-by: Lishen Yao <[email protected]>
Co-authored-by: Andrey Yegorov <[email protected]>
Co-authored-by: ZanderXu <[email protected]>
Co-authored-by: zengqiang.xu <[email protected]>
Co-authored-by: Matteo Merli <[email protected]>
13 people authored Mar 21, 2023
1 parent 0a2c4ae commit 6fcf22c
Showing 129 changed files with 2,350 additions and 4,588 deletions.
16 changes: 5 additions & 11 deletions .github/workflows/bk-ci.yml
@@ -116,13 +116,13 @@ jobs:
module: bookkeeper-server
flag: client
test_args: "-Dtest='org.apache.bookkeeper.client.**'"
- step_name: Remaining Tests
module: bookkeeper-server
flag: remaining
test_args: "-Dtest='org.apache.bookkeeper.replication.**'"
- step_name: Replication Tests
module: bookkeeper-server
flag: replication
test_args: "-Dtest='org.apache.bookkeeper.replication.**'"
- step_name: Remaining Tests
module: bookkeeper-server
flag: remaining
test_args: "-Dtest='!org.apache.bookkeeper.client.**,!org.apache.bookkeeper.bookie.**,!org.apache.bookkeeper.replication.**,!org.apache.bookkeeper.tls.**'"
- step_name: TLS Tests
module: bookkeeper-server
@@ -178,13 +178,7 @@ jobs:
if [[ ! -z "${{ matrix.module }}" ]]; then
projects_list="-pl ${{ matrix.module }}"
fi
mvn -Pcode-coverage -B -nsu $projects_list verify ${{ matrix.test_args }}
- name: Upload coverage to Codecov
if: ${{ matrix.flag }} != 'shell'
uses: codecov/codecov-action@v3
with:
flags: ${{ matrix.flag }}
mvn -B -nsu $projects_list verify ${{ matrix.test_args }}
- name: Aggregates all test reports to ./test-reports and ./surefire-reports directories
if: ${{ always() }}
85 changes: 85 additions & 0 deletions .github/workflows/bk-streamstorage-python.yml
@@ -0,0 +1,85 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#

name: BookKeeper StreamStorage Python Client
on:
pull_request:
branches:
- master
- branch-*
paths:
- 'stream/**'
- '.github/workflows/bk-streamstorage-python.yml'
push:
branches:
- master
- branch-*
paths:
- 'stream/**'
- '.github/workflows/bk-streamstorage-python.yml'

jobs:
stream-storage-python-client-unit-tests:
name: StreamStorage Python Client Unit Tests
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- name: checkout
uses: actions/checkout@v3
- name: Tune Runner VM
uses: ./.github/actions/tune-runner-vm
- name: Test
run: ./stream/clients/python/scripts/test.sh


Stream-storage-python-client-integration-tests:
name: StreamStorage Python Client Integration Tests
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- name: checkout
uses: actions/checkout@v3
- name: Tune Runner VM
uses: ./.github/actions/tune-runner-vm
- name: Cache local Maven repository
id: cache
uses: actions/cache@v3
with:
path: |
~/.m2/repository/*/*/*
!~/.m2/repository/org/apache/bookkeeper
!~/.m2/repository/org/apache/distributedlog
key: ${{ runner.os }}-bookkeeper-all-${{ hashFiles('**/pom.xml') }}
- name: Set up JDK 11
uses: actions/setup-java@v2
with:
distribution: 'temurin'
java-version: 11
- name: Set up Maven
uses: apache/pulsar-test-infra/setup-maven@master
with:
maven-version: 3.8.7
- name: Build
run: mvn -q -T 1C -B -nsu clean install -DskipTests -Dcheckstyle.skip -Dspotbugs.skip -Drat.skip -Dmaven.javadoc.skip
- name: Build Test image
run: ./stream/clients/python/docker/build-local-image.sh
- name: Test
run: ./stream/clients/python/scripts/docker_integration_tests.sh


2 changes: 2 additions & 0 deletions bin/bkperf
@@ -50,6 +50,8 @@ CLI_LOG_FILE=${CLI_LOG_FILE:-"bkperf.log"}
CLI_ROOT_LOG_LEVEL=${CLI_ROOT_LOG_LEVEL:-"INFO"}
CLI_ROOT_LOG_APPENDER=${CLI_ROOT_LOG_APPENDER:-"CONSOLE"}

mkdir -p ${CLI_LOG_DIR}

# Configure the classpath
CLI_CLASSPATH="$CLI_JAR:$CLI_CLASSPATH:$CLI_EXTRA_CLASSPATH"
CLI_CLASSPATH="`dirname $CLI_LOG_CONF`:$CLI_CLASSPATH"
@@ -111,7 +111,9 @@ public BenchThroughputLatency(int ensemble, int writeQuorumSize, int ackQuorumSi
Random rand = new Random();
public void close() throws InterruptedException, BKException {
for (int i = 0; i < numberOfLedgers; i++) {
lh[i].close();
if (lh[i] != null) {
lh[i].close();
}
}
bk.close();
}
@@ -257,6 +259,8 @@ public static void main(String[] args)
options.addOption("skipwarmup", false, "Skip warm up, default false");
options.addOption("sendlimit", true, "Max number of entries to send. Default 20000000");
options.addOption("latencyFile", true, "File to dump latencies. Default is latencyDump.dat");
options.addOption("useV2", false, "Whether use V2 protocol to send requests to the bookie server");
options.addOption("warmupMessages", true, "Number of messages to warm up. Default 10000");
options.addOption("help", false, "This message");

CommandLineParser parser = new PosixParser();
@@ -281,6 +285,7 @@ public static void main(String[] args)
}
int throttle = Integer.parseInt(cmd.getOptionValue("throttle", "10000"));
int sendLimit = Integer.parseInt(cmd.getOptionValue("sendlimit", "20000000"));
int warmupMessages = Integer.parseInt(cmd.getOptionValue("warmupMessages", "10000"));

final int sockTimeout = Integer.parseInt(cmd.getOptionValue("sockettimeout", "5"));

@@ -321,11 +326,15 @@ public void run() {
ClientConfiguration conf = new ClientConfiguration();
conf.setThrottleValue(throttle).setReadTimeout(sockTimeout).setZkServers(servers);

if (cmd.hasOption("useV2")) {
conf.setUseV2WireProtocol(true);
}

if (!cmd.hasOption("skipwarmup")) {
long throughput;
LOG.info("Starting warmup");

throughput = warmUp(data, ledgers, ensemble, quorum, passwd, conf);
throughput = warmUp(data, ledgers, ensemble, quorum, passwd, warmupMessages, conf);
LOG.info("Warmup tp: " + throughput);
LOG.info("Warmup phase finished");
}
@@ -438,7 +447,7 @@ private static double percentile(long[] latency, int percentile) {
* <p>TODO: update benchmark to use metadata service uri {@link https://github.com/apache/bookkeeper/issues/1331}
*/
private static long warmUp(byte[] data, int ledgers, int ensemble, int qSize,
byte[] passwd, ClientConfiguration conf)
byte[] passwd, int warmupMessages, ClientConfiguration conf)
throws KeeperException, IOException, InterruptedException, BKException {
final CountDownLatch connectLatch = new CountDownLatch(1);
final int bookies;
@@ -465,7 +474,7 @@ public void process(WatchedEvent event) {
}

BenchThroughputLatency warmup = new BenchThroughputLatency(bookies, bookies, bookies, passwd,
ledgers, 10000, conf);
ledgers, warmupMessages, conf);
warmup.setEntryData(data);
Thread thread = new Thread(warmup);
thread.start();
@@ -17,9 +17,12 @@
*/
package org.apache.bookkeeper.common.allocator;

import lombok.extern.slf4j.Slf4j;

/**
* Define the policy for the Netty leak detector.
*/
@Slf4j
public enum LeakDetectionPolicy {

/**
@@ -43,5 +46,17 @@ public enum LeakDetectionPolicy {
* stack traces of places where the buffer was used. Introduce very
* significant overhead.
*/
Paranoid,
Paranoid;

public static LeakDetectionPolicy parseLevel(String levelStr) {
String trimmedLevelStr = levelStr.trim();
for (LeakDetectionPolicy policy : values()) {
if (trimmedLevelStr.equalsIgnoreCase(policy.name())) {
return policy;
}
}
log.warn("Parse leak detection policy level {} failed. Use the default level: {}", levelStr,
LeakDetectionPolicy.Disabled.name());
return LeakDetectionPolicy.Disabled;
}
}
2 changes: 2 additions & 0 deletions bookkeeper-dist/src/main/resources/LICENSE-all.bin.txt
@@ -320,6 +320,7 @@ Apache Software License, Version 2.
- lib/org.xerial.snappy-snappy-java-1.1.7.7.jar [50]
- lib/io.reactivex.rxjava3-rxjava-3.0.1.jar [51]
- lib/org.hdrhistogram-HdrHistogram-2.1.10.jar [52]
- lib/com.carrotsearch-hppc-0.9.1.jar [53]

[1] Source available at https://github.com/FasterXML/jackson-annotations/tree/jackson-annotations-2.13.4
[2] Source available at https://github.com/FasterXML/jackson-core/tree/jackson-core-2.13.4
@@ -369,6 +370,7 @@ Apache Software License, Version 2.
[50] Source available at https://github.com/google/snappy/releases/tag/1.1.7.7
[51] Source available at https://github.com/ReactiveX/RxJava/tree/v3.0.1
[52] Source available at https://github.com/HdrHistogram/HdrHistogram/tree/HdrHistogram-2.1.10
[53] Source available at https://github.com/carrotsearch/hppc/tree/0.9.1

------------------------------------------------------------------------------------
lib/io.netty-netty-codec-4.1.89.Final.jar bundles some 3rd party dependencies
2 changes: 2 additions & 0 deletions bookkeeper-dist/src/main/resources/LICENSE-bkctl.bin.txt
@@ -291,6 +291,7 @@ Apache Software License, Version 2.
- lib/org.conscrypt-conscrypt-openjdk-uber-2.5.1.jar [49]
- lib/org.xerial.snappy-snappy-java-1.1.7.7.jar [50]
- lib/io.reactivex.rxjava3-rxjava-3.0.1.jar [51]
- lib/com.carrotsearch-hppc-0.9.1.jar [52]

[1] Source available at https://github.com/FasterXML/jackson-annotations/tree/jackson-annotations-2.13.4
[2] Source available at https://github.com/FasterXML/jackson-core/tree/jackson-core-2.13.4
@@ -331,6 +332,7 @@ Apache Software License, Version 2.
[49] Source available at https://github.com/google/conscrypt/releases/tag/2.5.1
[50] Source available at https://github.com/google/snappy/releases/tag/1.1.7.7
[51] Source available at https://github.com/ReactiveX/RxJava/tree/v3.0.1
[52] Source available at https://github.com/carrotsearch/hppc/tree/0.9.1

------------------------------------------------------------------------------------
lib/io.netty-netty-codec-4.1.89.Final.jar bundles some 3rd party dependencies
2 changes: 2 additions & 0 deletions bookkeeper-dist/src/main/resources/LICENSE-server.bin.txt
@@ -316,6 +316,7 @@ Apache Software License, Version 2.
- lib/org.conscrypt-conscrypt-openjdk-uber-2.5.1.jar [49]
- lib/org.xerial.snappy-snappy-java-1.1.7.7.jar [50]
- lib/io.reactivex.rxjava3-rxjava-3.0.1.jar [51]
- lib/com.carrotsearch-hppc-0.9.1.jar [52]

[1] Source available at https://github.com/FasterXML/jackson-annotations/tree/jackson-annotations-2.13.4
[2] Source available at https://github.com/FasterXML/jackson-core/tree/jackson-core-2.13.4
@@ -364,6 +365,7 @@ Apache Software License, Version 2.
[49] Source available at https://github.com/google/conscrypt/releases/tag/2.5.1
[50] Source available at https://github.com/google/snappy/releases/tag/1.1.7.7
[51] Source available at https://github.com/ReactiveX/RxJava/tree/v3.0.1
[52] Source available at https://github.com/carrotsearch/hppc/tree/0.9.1

------------------------------------------------------------------------------------
lib/io.netty-netty-codec-4.1.89.Final.jar bundles some 3rd party dependencies
@@ -55,6 +55,7 @@ public abstract class HttpRouter<Handler> {
public static final String BOOKIE_IS_READY = "/api/v1/bookie/is_ready";
public static final String BOOKIE_INFO = "/api/v1/bookie/info";
public static final String CLUSTER_INFO = "/api/v1/bookie/cluster_info";
public static final String ENTRY_LOCATION_COMPACT = "/api/v1/bookie/entry_location_compact";
// autorecovery
public static final String AUTORECOVERY_STATUS = "/api/v1/autorecovery/status";
public static final String RECOVERY_BOOKIE = "/api/v1/autorecovery/bookie";
@@ -97,6 +98,8 @@ public HttpRouter(AbstractHttpHandlerFactory<Handler> handlerFactory) {
handlerFactory.newHandler(HttpServer.ApiType.SUSPEND_GC_COMPACTION));
this.endpointHandlers.put(RESUME_GC_COMPACTION,
handlerFactory.newHandler(HttpServer.ApiType.RESUME_GC_COMPACTION));
this.endpointHandlers.put(ENTRY_LOCATION_COMPACT,
handlerFactory.newHandler(HttpServer.ApiType.TRIGGER_ENTRY_LOCATION_COMPACT));

// autorecovery
this.endpointHandlers.put(AUTORECOVERY_STATUS, handlerFactory
@@ -36,6 +36,7 @@ enum StatusCode {
BAD_REQUEST(400),
FORBIDDEN(403),
NOT_FOUND(404),
METHOD_NOT_ALLOWED(405),
INTERNAL_ERROR(500),
SERVICE_UNAVAILABLE(503);

@@ -89,6 +90,7 @@ enum ApiType {
CLUSTER_INFO,
RESUME_GC_COMPACTION,
SUSPEND_GC_COMPACTION,
TRIGGER_ENTRY_LOCATION_COMPACT,
// autorecovery
AUTORECOVERY_STATUS,
RECOVERY_BOOKIE,
4 changes: 4 additions & 0 deletions bookkeeper-server/pom.xml
@@ -149,6 +149,10 @@
<scope>runtime</scope>
<optional>true</optional>
</dependency>
<dependency>
<groupId>com.carrotsearch</groupId>
<artifactId>hppc</artifactId>
</dependency>
<!-- testing dependencies -->
<dependency>
<groupId>org.apache.bookkeeper</groupId>
@@ -23,7 +23,6 @@
import java.util.PrimitiveIterator;
import java.util.concurrent.CompletableFuture;
import org.apache.bookkeeper.common.util.Watcher;
import org.apache.bookkeeper.processor.RequestProcessor;
import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.WriteCallback;

/**
@@ -87,8 +86,6 @@ void cancelWaitForLastAddConfirmedUpdate(long ledgerId,
// TODO: Should be constructed and passed in as a parameter
LedgerStorage getLedgerStorage();

void setRequestProcessor(RequestProcessor requestProcessor);

// TODO: Move this exceptions somewhere else
/**
* Exception is thrown when no such a ledger is found in this bookie.
@@ -69,7 +69,6 @@
import org.apache.bookkeeper.net.BookieId;
import org.apache.bookkeeper.net.BookieSocketAddress;
import org.apache.bookkeeper.net.DNS;
import org.apache.bookkeeper.processor.RequestProcessor;
import org.apache.bookkeeper.proto.BookkeeperInternalCallbacks.WriteCallback;
import org.apache.bookkeeper.stats.NullStatsLogger;
import org.apache.bookkeeper.stats.StatsLogger;
@@ -1282,11 +1281,4 @@ public OfLong getListOfEntriesOfLedger(long ledgerId) throws IOException, NoLedg
}
}
}

@Override
public void setRequestProcessor(RequestProcessor requestProcessor) {
for (Journal journal : journals) {
journal.setRequestProcessor(requestProcessor);
}
}
}
@@ -24,13 +24,15 @@
import static java.nio.charset.StandardCharsets.UTF_8;
import static org.apache.bookkeeper.meta.MetadataDrivers.runFunctionWithRegistrationManager;

import com.google.common.annotations.VisibleForTesting;
import com.google.common.collect.Lists;
import com.google.common.util.concurrent.UncheckedExecutionException;
import java.io.File;
import java.io.FilenameFilter;
import java.io.IOException;
import java.net.MalformedURLException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
@@ -96,10 +98,17 @@ public boolean accept(File dir, String name) {
}
};

private static List<File> getAllDirectories(ServerConfiguration conf) {
@VisibleForTesting
public static List<File> getAllDirectories(ServerConfiguration conf) {
List<File> dirs = new ArrayList<>();
dirs.addAll(Lists.newArrayList(conf.getJournalDirs()));
Collections.addAll(dirs, conf.getLedgerDirs());
final File[] ledgerDirs = conf.getLedgerDirs();
final File[] indexDirs = conf.getIndexDirs();
if (indexDirs != null && indexDirs.length == ledgerDirs.length
&& !Arrays.asList(indexDirs).containsAll(Arrays.asList(ledgerDirs))) {
dirs.addAll(Lists.newArrayList(indexDirs));
}
Collections.addAll(dirs, ledgerDirs);
return dirs;
}
