Dockerfile & default_config have fallen behind featureset #395

Open
cryptosig opened this issue Oct 20, 2020 · 0 comments
Labels: bug (Something isn't working)

@cryptosig

The Dockerfile and docker/default_config.ini have fallen behind the config.ini that the node now generates by default: they do not account for logging.ini, and they are missing newer options such as:

# Keep only those operations in memory that are related to account history tracking
partial-operations = 1

# Maximum number of operations per account will be kept in memory
max-ops-per-account = 100

# Elastic Search database node url(http://localhost:9200/)
# elasticsearch-node-url = 

# Number of bulk documents to index on replay(10000)
# elasticsearch-bulk-replay = 

# Number of bulk documents to index on a synchronized chain(100)
# elasticsearch-bulk-sync = 

# Use visitor to index additional data(slows down the replay(false))
# elasticsearch-visitor = 

# Pass basic auth to elasticsearch database('')
# elasticsearch-basic-auth = 

# Add a prefix to the index(peerplays-)
# elasticsearch-index-prefix = 

# Save operation as object(false)
# elasticsearch-operation-object = 

# Start doing ES job after block(0)
# elasticsearch-start-es-after-block = 

# Save operation as string. Needed to serve history api calls(true)
# elasticsearch-operation-string = 

# Mode of operation: only_save(0), only_query(1), all(2) - Default: 0
# elasticsearch-mode = 

# Elasticsearch node url(http://localhost:9200/)
# es-objects-elasticsearch-url = 

# Basic auth username:password('')
# es-objects-auth = 

# Number of bulk documents to index on replay(10000)
# es-objects-bulk-replay = 

# Number of bulk documents to index on a synchronized chain(100)
# es-objects-bulk-sync = 

# Store proposal objects(true)
# es-objects-proposals = 

# Store account objects(true)
# es-objects-accounts = 

# Store asset objects(true)
# es-objects-assets = 

# Store balances objects(true)
# es-objects-balances = 

# Store limit order objects(true)
# es-objects-limit-orders = 

# Store feed data(true)
# es-objects-asset-bitasset = 

# Add a prefix to the index(ppobjects-)
# es-objects-index-prefix = 

# Keep only current state of the objects(true)
# es-objects-keep-only-current = 

# Start doing ES job after block(0)
# es-objects-start-es-after-block = 

and

# Block number after which to do a snapshot
# snapshot-at-block = 

# Block time (ISO format) after which to do a snapshot
# snapshot-at-time = 

# Pathname of JSON file where to store the snapshot
# snapshot-to = 


# ==============================================================================
# logging options
# ==============================================================================
#
# Logging configuration is loaded from logging.ini by default.
# If logging.ini exists, logging configuration added in this file will be ignored.
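
For reference, the generated logging.ini uses the standard Graphene/fc logging format, so a docker/default_logging.ini could be sketched along the following lines (the appender names, file paths, and levels here are illustrative assumptions, not the exact Peerplays defaults):

# appender "stderr" writes log messages to the console
[log.console_appender.stderr]
stream=std_error

# appender "p2p" writes log messages to a file
# (filename may be absolute or relative to this file)
[log.file_appender.p2p]
filename=logs/p2p/p2p.log

# route messages on the default logger, info level or higher, to "stderr"
[logger.default]
level=info
appenders=stderr

# route messages on the "p2p" logger to the "p2p" appender
[logger.p2p]
level=info
appenders=p2p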

I propose updating docker/default_config.ini to more accurately reflect the config.ini and logging.ini files that the node now creates by default, and adding the following line to the Dockerfile:

ADD docker/default_logging.ini /etc/peerplays/logging.ini
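
For context, the relevant part of the Dockerfile would then look roughly like the sketch below; it assumes the image already installs docker/default_config.ini as /etc/peerplays/config.ini, and default_logging.ini would be a new file added under docker/:

# install the default node configuration and the proposed default logging configuration
ADD docker/default_config.ini /etc/peerplays/config.ini
ADD docker/default_logging.ini /etc/peerplays/logging.ini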
@bobinson bobinson added the bug Something isn't working label Oct 20, 2020